Test Report: Docker_Linux_crio_arm64 21997

f52e7af1cf54d5c1b3af81f5f4f56bb8b0b6d6f9:2025-12-01:42595
Failed tests (57/316)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.39
44 TestAddons/parallel/Registry 18.17
45 TestAddons/parallel/RegistryCreds 0.61
46 TestAddons/parallel/Ingress 143.85
47 TestAddons/parallel/InspektorGadget 6.29
48 TestAddons/parallel/MetricsServer 5.41
50 TestAddons/parallel/CSI 30.51
51 TestAddons/parallel/Headlamp 3.39
52 TestAddons/parallel/CloudSpanner 5.34
53 TestAddons/parallel/LocalPath 8.47
54 TestAddons/parallel/NvidiaDevicePlugin 6.29
55 TestAddons/parallel/Yakd 6.27
106 TestFunctional/parallel/ServiceCmdConnect 603.66
134 TestFunctional/parallel/ServiceCmd/DeployApp 600.88
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
144 TestFunctional/parallel/ServiceCmd/Format 0.59
145 TestFunctional/parallel/ServiceCmd/URL 0.51
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.27
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.26
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.43
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.29
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.45
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 509.18
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 369.25
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.53
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.59
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.57
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 733.63
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.18
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.85
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.15
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.4
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.76
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 3.03
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.24
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.05
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.47
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.43
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.37
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.53
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.08
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.35
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.35
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.37
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.57
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.41
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.37
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.19
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 102.23
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.3
293 TestJSONOutput/pause/Command 1.87
299 TestJSONOutput/unpause/Command 2.08
358 TestKubernetesUpgrade 799.1
384 TestPause/serial/Pause 7.61
441 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7200.066
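For triage, rows in the table above can be parsed into (order, test, duration) tuples and grouped by top-level suite. A minimal sketch (the sample rows are copied from the table; the helper names are illustrative, not part of the test harness):

```python
import re
from collections import Counter

# Each table row is "<order> <test-path> <duration-in-seconds>".
ROW = re.compile(r"^(\d+)\s+(\S+)\s+([\d.]+)$")

def parse_rows(text):
    """Yield (order, test, duration) tuples from failure-table rows."""
    for line in text.splitlines():
        m = ROW.match(line.strip())
        if m:
            yield int(m.group(1)), m.group(2), float(m.group(3))

def by_suite(rows):
    """Count failures per top-level test (the segment before the first '/')."""
    return Counter(test.split("/")[0] for _, test, _ in rows)

sample = """\
38 TestAddons/serial/Volcano 0.39
44 TestAddons/parallel/Registry 18.17
293 TestJSONOutput/pause/Command 1.87
"""
rows = list(parse_rows(sample))
```

Grouping this way makes the pattern in this run obvious: most of the 57 failures sit under TestAddons and TestFunctionalNewestKubernetes.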
TestAddons/serial/Volcano (0.39s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-947185 addons disable volcano --alsologtostderr -v=1: exit status 11 (389.158794ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1201 20:40:34.097560  492918 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:40:34.098476  492918 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:40:34.098523  492918 out.go:374] Setting ErrFile to fd 2...
	I1201 20:40:34.098550  492918 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:40:34.099227  492918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:40:34.099665  492918 mustload.go:66] Loading cluster: addons-947185
	I1201 20:40:34.100116  492918 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:40:34.100136  492918 addons.go:622] checking whether the cluster is paused
	I1201 20:40:34.100293  492918 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:40:34.100311  492918 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:40:34.100842  492918 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:40:34.122178  492918 ssh_runner.go:195] Run: systemctl --version
	I1201 20:40:34.122248  492918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:40:34.141855  492918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:40:34.272396  492918 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:40:34.272562  492918 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:40:34.312311  492918 cri.go:89] found id: "7b62bd9d487096c579f1550339b68679be8332190765f60e06cd4937777a9df1"
	I1201 20:40:34.312331  492918 cri.go:89] found id: "29c40b113be21f8fe1bbe615bf111319d1777cc9025daf564682c1eefb3b445b"
	I1201 20:40:34.312337  492918 cri.go:89] found id: "e850c7755eb3428fe6fa7ba19c93fb7bc371967c19c4b82128cc91cb8053b5f3"
	I1201 20:40:34.312341  492918 cri.go:89] found id: "efdf78311fe62cfc0a35e43f8eeb729633a306dc0ea4ee568313518540399159"
	I1201 20:40:34.312344  492918 cri.go:89] found id: "232ae6c256a292c984f8cb48df8eceb3ee1873530d9e6f34c1a187c754908802"
	I1201 20:40:34.312349  492918 cri.go:89] found id: "7a49eee06f360dfeaf94beb2bbb4cdce7e5500414fdd2cee0ce12df2e5eb7f32"
	I1201 20:40:34.312352  492918 cri.go:89] found id: "dea3b2ad8e17be71f39c61f41026c7cb1b4623b5b887bff64c5b0486499999a1"
	I1201 20:40:34.312355  492918 cri.go:89] found id: "295353c277ab2fdf17a5bdf35885cd4aaf50e1c7a0310e8e9e47c938ee142acc"
	I1201 20:40:34.312358  492918 cri.go:89] found id: "b322f4a7417f96b30191db63c4f54268c9461124eb22cd29fa7aeee5aeec2c92"
	I1201 20:40:34.312385  492918 cri.go:89] found id: "dfa409f637400d697ead65609bdc54109d491cdce86d60e6c023d32ba59f02ae"
	I1201 20:40:34.312393  492918 cri.go:89] found id: "7e60a35a8eba6d85c1e35fe7520e0df66d2be5e95549b379c81bee82272e106c"
	I1201 20:40:34.312397  492918 cri.go:89] found id: "3f46bcefd8d83c33619ab577977393c12c9eb43945e7d3125f4e246f5b0455d5"
	I1201 20:40:34.312400  492918 cri.go:89] found id: "58cd25bffc816d350673df609f72e7f334b3ed0cfccb32cf1b2638a79781b10e"
	I1201 20:40:34.312403  492918 cri.go:89] found id: "361dc8194383806d837ada675e727c49f53ac9cfd9b315a3224ea1ce0ebfcc3b"
	I1201 20:40:34.312406  492918 cri.go:89] found id: "9f83ec5f5e5514d5a500d7b543761751c20c52d5b0c4da0872a31d0231b628fd"
	I1201 20:40:34.312411  492918 cri.go:89] found id: "2355b41e2da84e3db29da2f6728212647e392fda597ebd954072085ccc5b4440"
	I1201 20:40:34.312425  492918 cri.go:89] found id: "1837dcaf5caf8fbebc71252339be8e05fe293e1db73f148ce648a43a877e6c06"
	I1201 20:40:34.312429  492918 cri.go:89] found id: "95ac3b0ee00d6ddb757ec6c4e57282c44007d2ea906b924c19d96021bc597dd9"
	I1201 20:40:34.312432  492918 cri.go:89] found id: "53fd34a71ad2647a883f70ec1aceb708dc5a011083d943427fe324abe79d43ac"
	I1201 20:40:34.312435  492918 cri.go:89] found id: "913315b106bf848f2bc78aeee2dff59fb0d7a2768c8a5dc7d27460b0037c689d"
	I1201 20:40:34.312441  492918 cri.go:89] found id: "d708a60b3df7ced4763b714c1f1a36c6df9483c81552da97ea0386f1f248b3ef"
	I1201 20:40:34.312444  492918 cri.go:89] found id: "2608ffb63d77980a71676c95316c60a1bf74002a61cf3024ec1b056d5b0cf0be"
	I1201 20:40:34.312459  492918 cri.go:89] found id: "969d358cb0a5cd5ce66e56d51a58b46aef284ea9dc6eb5b45fbef1ed0b16310d"
	I1201 20:40:34.312463  492918 cri.go:89] found id: ""
	I1201 20:40:34.312515  492918 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:40:34.342507  492918 out.go:203] 
	W1201 20:40:34.345556  492918 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:40:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:40:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:40:34.345583  492918 out.go:285] * 
	* 
	W1201 20:40:34.384703  492918 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:40:34.387933  492918 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-947185 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.39s)
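The Volcano failure (and the Registry, RegistryCreds, and Ingress-adjacent failures below) exits the same way: `addons disable` first checks whether the cluster is paused by running `sudo runc list -f json`, which fails on this crio node with `open /run/runc: no such file or directory`, so every disable step exits with status 11 and MK_ADDON_DISABLE_PAUSED. A small sketch for pulling that exit reason out of a captured stderr, useful when bucketing the repeated failures (the regex and helper are illustrative, not minikube code; the sample line is taken from the log above):

```python
import re

# Matches minikube's "X Exiting due to <REASON>: <detail>" stderr line.
EXIT = re.compile(r"X Exiting due to (\w+): (.+)")

def exit_reason(stderr):
    """Return (reason, detail) from minikube stderr, or None if absent."""
    m = EXIT.search(stderr)
    return (m.group(1), m.group(2)) if m else None

stderr = (
    "X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: "
    "check paused: list paused: runc: sudo runc list -f json: "
    "Process exited with status 1"
)
reason, detail = exit_reason(stderr)
```

Every addon-disable failure in this report yields the same (reason, detail) pair, which points at a single root cause on the node rather than per-addon breakage.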
TestAddons/parallel/Registry (18.17s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.974813ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-m876b" [99b02fcf-a463-48f7-b563-a88a6be051c6] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004377578s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-scbhm" [42f9b46b-5402-4199-a084-012a354ce2c6] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003897968s
addons_test.go:392: (dbg) Run:  kubectl --context addons-947185 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-947185 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-947185 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.611234796s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 ip
2025/12/01 20:41:01 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-947185 addons disable registry --alsologtostderr -v=1: exit status 11 (273.578015ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1201 20:41:01.677180  493540 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:41:01.677968  493540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:01.677983  493540 out.go:374] Setting ErrFile to fd 2...
	I1201 20:41:01.677989  493540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:01.678276  493540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:41:01.678586  493540 mustload.go:66] Loading cluster: addons-947185
	I1201 20:41:01.678992  493540 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:01.679014  493540 addons.go:622] checking whether the cluster is paused
	I1201 20:41:01.679127  493540 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:01.679182  493540 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:41:01.679754  493540 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:41:01.698278  493540 ssh_runner.go:195] Run: systemctl --version
	I1201 20:41:01.698344  493540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:41:01.721439  493540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:41:01.826231  493540 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:41:01.826334  493540 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:41:01.858597  493540 cri.go:89] found id: "7b62bd9d487096c579f1550339b68679be8332190765f60e06cd4937777a9df1"
	I1201 20:41:01.858623  493540 cri.go:89] found id: "29c40b113be21f8fe1bbe615bf111319d1777cc9025daf564682c1eefb3b445b"
	I1201 20:41:01.858633  493540 cri.go:89] found id: "e850c7755eb3428fe6fa7ba19c93fb7bc371967c19c4b82128cc91cb8053b5f3"
	I1201 20:41:01.858638  493540 cri.go:89] found id: "efdf78311fe62cfc0a35e43f8eeb729633a306dc0ea4ee568313518540399159"
	I1201 20:41:01.858641  493540 cri.go:89] found id: "232ae6c256a292c984f8cb48df8eceb3ee1873530d9e6f34c1a187c754908802"
	I1201 20:41:01.858645  493540 cri.go:89] found id: "7a49eee06f360dfeaf94beb2bbb4cdce7e5500414fdd2cee0ce12df2e5eb7f32"
	I1201 20:41:01.858649  493540 cri.go:89] found id: "dea3b2ad8e17be71f39c61f41026c7cb1b4623b5b887bff64c5b0486499999a1"
	I1201 20:41:01.858652  493540 cri.go:89] found id: "295353c277ab2fdf17a5bdf35885cd4aaf50e1c7a0310e8e9e47c938ee142acc"
	I1201 20:41:01.858655  493540 cri.go:89] found id: "b322f4a7417f96b30191db63c4f54268c9461124eb22cd29fa7aeee5aeec2c92"
	I1201 20:41:01.858661  493540 cri.go:89] found id: "dfa409f637400d697ead65609bdc54109d491cdce86d60e6c023d32ba59f02ae"
	I1201 20:41:01.858665  493540 cri.go:89] found id: "7e60a35a8eba6d85c1e35fe7520e0df66d2be5e95549b379c81bee82272e106c"
	I1201 20:41:01.858668  493540 cri.go:89] found id: "3f46bcefd8d83c33619ab577977393c12c9eb43945e7d3125f4e246f5b0455d5"
	I1201 20:41:01.858671  493540 cri.go:89] found id: "58cd25bffc816d350673df609f72e7f334b3ed0cfccb32cf1b2638a79781b10e"
	I1201 20:41:01.858674  493540 cri.go:89] found id: "361dc8194383806d837ada675e727c49f53ac9cfd9b315a3224ea1ce0ebfcc3b"
	I1201 20:41:01.858678  493540 cri.go:89] found id: "9f83ec5f5e5514d5a500d7b543761751c20c52d5b0c4da0872a31d0231b628fd"
	I1201 20:41:01.858683  493540 cri.go:89] found id: "2355b41e2da84e3db29da2f6728212647e392fda597ebd954072085ccc5b4440"
	I1201 20:41:01.858686  493540 cri.go:89] found id: "1837dcaf5caf8fbebc71252339be8e05fe293e1db73f148ce648a43a877e6c06"
	I1201 20:41:01.858690  493540 cri.go:89] found id: "95ac3b0ee00d6ddb757ec6c4e57282c44007d2ea906b924c19d96021bc597dd9"
	I1201 20:41:01.858693  493540 cri.go:89] found id: "53fd34a71ad2647a883f70ec1aceb708dc5a011083d943427fe324abe79d43ac"
	I1201 20:41:01.858696  493540 cri.go:89] found id: "913315b106bf848f2bc78aeee2dff59fb0d7a2768c8a5dc7d27460b0037c689d"
	I1201 20:41:01.858700  493540 cri.go:89] found id: "d708a60b3df7ced4763b714c1f1a36c6df9483c81552da97ea0386f1f248b3ef"
	I1201 20:41:01.858704  493540 cri.go:89] found id: "2608ffb63d77980a71676c95316c60a1bf74002a61cf3024ec1b056d5b0cf0be"
	I1201 20:41:01.858707  493540 cri.go:89] found id: "969d358cb0a5cd5ce66e56d51a58b46aef284ea9dc6eb5b45fbef1ed0b16310d"
	I1201 20:41:01.858711  493540 cri.go:89] found id: ""
	I1201 20:41:01.858771  493540 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:41:01.876902  493540 out.go:203] 
	W1201 20:41:01.880079  493540 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:41:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:41:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:41:01.880108  493540 out.go:285] * 
	* 
	W1201 20:41:01.886857  493540 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:41:01.889923  493540 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-947185 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (18.17s)

TestAddons/parallel/RegistryCreds (0.61s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.623123ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-947185
addons_test.go:332: (dbg) Run:  kubectl --context addons-947185 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-947185 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (343.262464ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1201 20:41:29.638089  495072 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:41:29.639813  495072 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:29.639876  495072 out.go:374] Setting ErrFile to fd 2...
	I1201 20:41:29.639899  495072 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:29.640235  495072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:41:29.640579  495072 mustload.go:66] Loading cluster: addons-947185
	I1201 20:41:29.641010  495072 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:29.641048  495072 addons.go:622] checking whether the cluster is paused
	I1201 20:41:29.641195  495072 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:29.641225  495072 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:41:29.641813  495072 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:41:29.663048  495072 ssh_runner.go:195] Run: systemctl --version
	I1201 20:41:29.663103  495072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:41:29.692260  495072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:41:29.802040  495072 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:41:29.802133  495072 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:41:29.868392  495072 cri.go:89] found id: "7b62bd9d487096c579f1550339b68679be8332190765f60e06cd4937777a9df1"
	I1201 20:41:29.868411  495072 cri.go:89] found id: "29c40b113be21f8fe1bbe615bf111319d1777cc9025daf564682c1eefb3b445b"
	I1201 20:41:29.868416  495072 cri.go:89] found id: "e850c7755eb3428fe6fa7ba19c93fb7bc371967c19c4b82128cc91cb8053b5f3"
	I1201 20:41:29.868419  495072 cri.go:89] found id: "efdf78311fe62cfc0a35e43f8eeb729633a306dc0ea4ee568313518540399159"
	I1201 20:41:29.868423  495072 cri.go:89] found id: "232ae6c256a292c984f8cb48df8eceb3ee1873530d9e6f34c1a187c754908802"
	I1201 20:41:29.868427  495072 cri.go:89] found id: "7a49eee06f360dfeaf94beb2bbb4cdce7e5500414fdd2cee0ce12df2e5eb7f32"
	I1201 20:41:29.868430  495072 cri.go:89] found id: "dea3b2ad8e17be71f39c61f41026c7cb1b4623b5b887bff64c5b0486499999a1"
	I1201 20:41:29.868433  495072 cri.go:89] found id: "295353c277ab2fdf17a5bdf35885cd4aaf50e1c7a0310e8e9e47c938ee142acc"
	I1201 20:41:29.868437  495072 cri.go:89] found id: "b322f4a7417f96b30191db63c4f54268c9461124eb22cd29fa7aeee5aeec2c92"
	I1201 20:41:29.868444  495072 cri.go:89] found id: "dfa409f637400d697ead65609bdc54109d491cdce86d60e6c023d32ba59f02ae"
	I1201 20:41:29.868447  495072 cri.go:89] found id: "7e60a35a8eba6d85c1e35fe7520e0df66d2be5e95549b379c81bee82272e106c"
	I1201 20:41:29.868450  495072 cri.go:89] found id: "3f46bcefd8d83c33619ab577977393c12c9eb43945e7d3125f4e246f5b0455d5"
	I1201 20:41:29.868453  495072 cri.go:89] found id: "58cd25bffc816d350673df609f72e7f334b3ed0cfccb32cf1b2638a79781b10e"
	I1201 20:41:29.868456  495072 cri.go:89] found id: "361dc8194383806d837ada675e727c49f53ac9cfd9b315a3224ea1ce0ebfcc3b"
	I1201 20:41:29.868459  495072 cri.go:89] found id: "9f83ec5f5e5514d5a500d7b543761751c20c52d5b0c4da0872a31d0231b628fd"
	I1201 20:41:29.868466  495072 cri.go:89] found id: "2355b41e2da84e3db29da2f6728212647e392fda597ebd954072085ccc5b4440"
	I1201 20:41:29.868469  495072 cri.go:89] found id: "1837dcaf5caf8fbebc71252339be8e05fe293e1db73f148ce648a43a877e6c06"
	I1201 20:41:29.868474  495072 cri.go:89] found id: "95ac3b0ee00d6ddb757ec6c4e57282c44007d2ea906b924c19d96021bc597dd9"
	I1201 20:41:29.868477  495072 cri.go:89] found id: "53fd34a71ad2647a883f70ec1aceb708dc5a011083d943427fe324abe79d43ac"
	I1201 20:41:29.868480  495072 cri.go:89] found id: "913315b106bf848f2bc78aeee2dff59fb0d7a2768c8a5dc7d27460b0037c689d"
	I1201 20:41:29.868488  495072 cri.go:89] found id: "d708a60b3df7ced4763b714c1f1a36c6df9483c81552da97ea0386f1f248b3ef"
	I1201 20:41:29.868493  495072 cri.go:89] found id: "2608ffb63d77980a71676c95316c60a1bf74002a61cf3024ec1b056d5b0cf0be"
	I1201 20:41:29.868495  495072 cri.go:89] found id: "969d358cb0a5cd5ce66e56d51a58b46aef284ea9dc6eb5b45fbef1ed0b16310d"
	I1201 20:41:29.868498  495072 cri.go:89] found id: ""
	I1201 20:41:29.868548  495072 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:41:29.887456  495072 out.go:203] 
	W1201 20:41:29.891534  495072 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:41:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:41:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:41:29.891573  495072 out.go:285] * 
	* 
	W1201 20:41:29.898743  495072 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:41:29.902975  495072 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-947185 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.61s)
x
+
TestAddons/parallel/Ingress (143.85s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-947185 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-947185 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-947185 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [3db87fd7-f7cc-40ad-9fe2-6d2ae6fd1701] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [3db87fd7-f7cc-40ad-9fe2-6d2ae6fd1701] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003447507s
I1201 20:41:37.917587  486002 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-947185 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.863966845s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-947185 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-947185
helpers_test.go:243: (dbg) docker inspect addons-947185:

-- stdout --
	[
	    {
	        "Id": "1e76f49106608dd7ce6e43e1d3af9a19c21e25311ae9d3cf51c18fc94ebdecb9",
	        "Created": "2025-12-01T20:38:06.56379414Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 487410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:38:06.632612263Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/1e76f49106608dd7ce6e43e1d3af9a19c21e25311ae9d3cf51c18fc94ebdecb9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1e76f49106608dd7ce6e43e1d3af9a19c21e25311ae9d3cf51c18fc94ebdecb9/hostname",
	        "HostsPath": "/var/lib/docker/containers/1e76f49106608dd7ce6e43e1d3af9a19c21e25311ae9d3cf51c18fc94ebdecb9/hosts",
	        "LogPath": "/var/lib/docker/containers/1e76f49106608dd7ce6e43e1d3af9a19c21e25311ae9d3cf51c18fc94ebdecb9/1e76f49106608dd7ce6e43e1d3af9a19c21e25311ae9d3cf51c18fc94ebdecb9-json.log",
	        "Name": "/addons-947185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-947185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-947185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1e76f49106608dd7ce6e43e1d3af9a19c21e25311ae9d3cf51c18fc94ebdecb9",
	                "LowerDir": "/var/lib/docker/overlay2/3dc9a77c3516cdaa521570b418a8a7608cf48ac01accb0d6dff10e3cf7bdc79a-init/diff:/var/lib/docker/overlay2/f0ba49b44048d740697b37803f992c2f7a99e21ce77995ff128ceffc01329aa1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3dc9a77c3516cdaa521570b418a8a7608cf48ac01accb0d6dff10e3cf7bdc79a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3dc9a77c3516cdaa521570b418a8a7608cf48ac01accb0d6dff10e3cf7bdc79a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3dc9a77c3516cdaa521570b418a8a7608cf48ac01accb0d6dff10e3cf7bdc79a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-947185",
	                "Source": "/var/lib/docker/volumes/addons-947185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-947185",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-947185",
	                "name.minikube.sigs.k8s.io": "addons-947185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "129e0dff262722110da0027a2bfe8f668c0ba8048f6336541d3e5766568353a5",
	            "SandboxKey": "/var/run/docker/netns/129e0dff2627",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-947185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:89:88:7d:b0:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dfb6f6fd26bfa307a0d061351931e04913ad57ce59ce0f7157642befa78f7126",
	                    "EndpointID": "3db396a9fc8c3d1334e7d458d576fdcc66e89939e3a97c9f93373fc684af7947",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-947185",
	                        "1e76f4910660"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-947185 -n addons-947185
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-947185 logs -n 25: (1.526338561s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-074980                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-074980 │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │ 01 Dec 25 20:37 UTC │
	│ start   │ --download-only -p binary-mirror-986055 --alsologtostderr --binary-mirror http://127.0.0.1:41593 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-986055   │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │                     │
	│ delete  │ -p binary-mirror-986055                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-986055   │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │ 01 Dec 25 20:37 UTC │
	│ addons  │ enable dashboard -p addons-947185                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │                     │
	│ addons  │ disable dashboard -p addons-947185                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │                     │
	│ start   │ -p addons-947185 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │ 01 Dec 25 20:40 UTC │
	│ addons  │ addons-947185 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:40 UTC │                     │
	│ addons  │ addons-947185 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:40 UTC │                     │
	│ addons  │ addons-947185 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:40 UTC │                     │
	│ ip      │ addons-947185 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │ 01 Dec 25 20:41 UTC │
	│ addons  │ addons-947185 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │                     │
	│ addons  │ addons-947185 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │                     │
	│ ssh     │ addons-947185 ssh cat /opt/local-path-provisioner/pvc-8ec1522d-0dd4-4a4c-a6f7-cb8725038640_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │ 01 Dec 25 20:41 UTC │
	│ addons  │ addons-947185 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │                     │
	│ addons  │ addons-947185 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │                     │
	│ addons  │ addons-947185 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │                     │
	│ addons  │ enable headlamp -p addons-947185 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │                     │
	│ addons  │ addons-947185 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │                     │
	│ addons  │ addons-947185 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │                     │
	│ addons  │ addons-947185 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │                     │
	│ addons  │ addons-947185 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-947185                                                                                                                                                                                                                                                                                                                                                                                           │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │ 01 Dec 25 20:41 UTC │
	│ addons  │ addons-947185 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │                     │
	│ ssh     │ addons-947185 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │                     │
	│ ip      │ addons-947185 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:43 UTC │ 01 Dec 25 20:43 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:37:59
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:37:59.032536  487012 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:37:59.032739  487012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:37:59.032773  487012 out.go:374] Setting ErrFile to fd 2...
	I1201 20:37:59.032796  487012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:37:59.033098  487012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:37:59.033637  487012 out.go:368] Setting JSON to false
	I1201 20:37:59.034537  487012 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8428,"bootTime":1764613051,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 20:37:59.034646  487012 start.go:143] virtualization:  
	I1201 20:37:59.038483  487012 out.go:179] * [addons-947185] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 20:37:59.041855  487012 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:37:59.041911  487012 notify.go:221] Checking for updates...
	I1201 20:37:59.048094  487012 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:37:59.051164  487012 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 20:37:59.054238  487012 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 20:37:59.057307  487012 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 20:37:59.060284  487012 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:37:59.063596  487012 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:37:59.088630  487012 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 20:37:59.088777  487012 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:37:59.164563  487012 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-01 20:37:59.155025879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 20:37:59.164682  487012 docker.go:319] overlay module found
	I1201 20:37:59.168009  487012 out.go:179] * Using the docker driver based on user configuration
	I1201 20:37:59.171038  487012 start.go:309] selected driver: docker
	I1201 20:37:59.171064  487012 start.go:927] validating driver "docker" against <nil>
	I1201 20:37:59.171079  487012 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:37:59.171998  487012 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:37:59.229266  487012 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-01 20:37:59.219876105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 20:37:59.229440  487012 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1201 20:37:59.229691  487012 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:37:59.232655  487012 out.go:179] * Using Docker driver with root privileges
	I1201 20:37:59.235606  487012 cni.go:84] Creating CNI manager for ""
	I1201 20:37:59.235690  487012 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:37:59.235706  487012 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1201 20:37:59.235782  487012 start.go:353] cluster config:
	{Name:addons-947185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-947185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:37:59.238989  487012 out.go:179] * Starting "addons-947185" primary control-plane node in "addons-947185" cluster
	I1201 20:37:59.241778  487012 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:37:59.244825  487012 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:37:59.247583  487012 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:37:59.247635  487012 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1201 20:37:59.247646  487012 cache.go:65] Caching tarball of preloaded images
	I1201 20:37:59.247751  487012 preload.go:238] Found /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1201 20:37:59.247766  487012 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 20:37:59.248127  487012 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/config.json ...
	I1201 20:37:59.248152  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/config.json: {Name:mk58b9e23075d2dc4424a61d1dac09e84405f00d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:37:59.248324  487012 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 20:37:59.269868  487012 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:37:59.269889  487012 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1201 20:37:59.269911  487012 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:37:59.269945  487012 start.go:360] acquireMachinesLock for addons-947185: {Name:mkc87eceafa2be40884bb90866de997784cee8a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:37:59.270055  487012 start.go:364] duration metric: took 94.866µs to acquireMachinesLock for "addons-947185"
	I1201 20:37:59.270082  487012 start.go:93] Provisioning new machine with config: &{Name:addons-947185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-947185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:37:59.270152  487012 start.go:125] createHost starting for "" (driver="docker")
	I1201 20:37:59.273672  487012 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1201 20:37:59.274019  487012 start.go:159] libmachine.API.Create for "addons-947185" (driver="docker")
	I1201 20:37:59.274069  487012 client.go:173] LocalClient.Create starting
	I1201 20:37:59.274213  487012 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem
	I1201 20:37:59.795077  487012 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem
	I1201 20:38:00.020385  487012 cli_runner.go:164] Run: docker network inspect addons-947185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1201 20:38:00.056043  487012 cli_runner.go:211] docker network inspect addons-947185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1201 20:38:00.056159  487012 network_create.go:284] running [docker network inspect addons-947185] to gather additional debugging logs...
	I1201 20:38:00.056180  487012 cli_runner.go:164] Run: docker network inspect addons-947185
	W1201 20:38:00.079950  487012 cli_runner.go:211] docker network inspect addons-947185 returned with exit code 1
	I1201 20:38:00.079983  487012 network_create.go:287] error running [docker network inspect addons-947185]: docker network inspect addons-947185: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-947185 not found
	I1201 20:38:00.080000  487012 network_create.go:289] output of [docker network inspect addons-947185]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-947185 not found
	
	** /stderr **
	I1201 20:38:00.080111  487012 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:38:00.122250  487012 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001af0060}
	I1201 20:38:00.122304  487012 network_create.go:124] attempt to create docker network addons-947185 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1201 20:38:00.122370  487012 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-947185 addons-947185
	I1201 20:38:00.267581  487012 network_create.go:108] docker network addons-947185 192.168.49.0/24 created
	I1201 20:38:00.267623  487012 kic.go:121] calculated static IP "192.168.49.2" for the "addons-947185" container
	I1201 20:38:00.267751  487012 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1201 20:38:00.321128  487012 cli_runner.go:164] Run: docker volume create addons-947185 --label name.minikube.sigs.k8s.io=addons-947185 --label created_by.minikube.sigs.k8s.io=true
	I1201 20:38:00.355468  487012 oci.go:103] Successfully created a docker volume addons-947185
	I1201 20:38:00.355571  487012 cli_runner.go:164] Run: docker run --rm --name addons-947185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-947185 --entrypoint /usr/bin/test -v addons-947185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1201 20:38:02.356265  487012 cli_runner.go:217] Completed: docker run --rm --name addons-947185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-947185 --entrypoint /usr/bin/test -v addons-947185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (2.000652563s)
	I1201 20:38:02.356301  487012 oci.go:107] Successfully prepared a docker volume addons-947185
	I1201 20:38:02.356351  487012 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:38:02.356369  487012 kic.go:194] Starting extracting preloaded images to volume ...
	I1201 20:38:02.356452  487012 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-947185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1201 20:38:06.465652  487012 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-947185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.109156789s)
	I1201 20:38:06.465706  487012 kic.go:203] duration metric: took 4.109333294s to extract preloaded images to volume ...
	W1201 20:38:06.465864  487012 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1201 20:38:06.466006  487012 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1201 20:38:06.544261  487012 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-947185 --name addons-947185 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-947185 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-947185 --network addons-947185 --ip 192.168.49.2 --volume addons-947185:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1201 20:38:06.866643  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Running}}
	I1201 20:38:06.891394  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:06.919940  487012 cli_runner.go:164] Run: docker exec addons-947185 stat /var/lib/dpkg/alternatives/iptables
	I1201 20:38:06.982784  487012 oci.go:144] the created container "addons-947185" has a running status.
	I1201 20:38:06.982826  487012 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa...
	I1201 20:38:07.296985  487012 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1201 20:38:07.326206  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:07.351731  487012 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1201 20:38:07.351759  487012 kic_runner.go:114] Args: [docker exec --privileged addons-947185 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1201 20:38:07.420675  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:07.440392  487012 machine.go:94] provisionDockerMachine start ...
	I1201 20:38:07.440508  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:07.460304  487012 main.go:143] libmachine: Using SSH client type: native
	I1201 20:38:07.460655  487012 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33165 <nil> <nil>}
	I1201 20:38:07.460673  487012 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:38:07.461347  487012 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1201 20:38:10.615211  487012 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-947185
	
	I1201 20:38:10.615240  487012 ubuntu.go:182] provisioning hostname "addons-947185"
	I1201 20:38:10.615315  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:10.635497  487012 main.go:143] libmachine: Using SSH client type: native
	I1201 20:38:10.635869  487012 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33165 <nil> <nil>}
	I1201 20:38:10.635891  487012 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-947185 && echo "addons-947185" | sudo tee /etc/hostname
	I1201 20:38:10.797148  487012 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-947185
	
	I1201 20:38:10.797294  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:10.815791  487012 main.go:143] libmachine: Using SSH client type: native
	I1201 20:38:10.816114  487012 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33165 <nil> <nil>}
	I1201 20:38:10.816135  487012 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-947185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-947185/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-947185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:38:10.967886  487012 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:38:10.967916  487012 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-482752/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-482752/.minikube}
	I1201 20:38:10.967952  487012 ubuntu.go:190] setting up certificates
	I1201 20:38:10.967964  487012 provision.go:84] configureAuth start
	I1201 20:38:10.968048  487012 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-947185
	I1201 20:38:10.987490  487012 provision.go:143] copyHostCerts
	I1201 20:38:10.987583  487012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem (1082 bytes)
	I1201 20:38:10.987721  487012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem (1123 bytes)
	I1201 20:38:10.987795  487012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem (1675 bytes)
	I1201 20:38:10.987865  487012 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem org=jenkins.addons-947185 san=[127.0.0.1 192.168.49.2 addons-947185 localhost minikube]
	I1201 20:38:11.349453  487012 provision.go:177] copyRemoteCerts
	I1201 20:38:11.349792  487012 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:38:11.349842  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:11.368649  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:11.476349  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:38:11.498734  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1201 20:38:11.518709  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1201 20:38:11.538353  487012 provision.go:87] duration metric: took 570.360369ms to configureAuth
	I1201 20:38:11.538384  487012 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:38:11.538586  487012 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:38:11.538706  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:11.557923  487012 main.go:143] libmachine: Using SSH client type: native
	I1201 20:38:11.558272  487012 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33165 <nil> <nil>}
	I1201 20:38:11.558296  487012 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:38:12.028990  487012 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:38:12.029020  487012 machine.go:97] duration metric: took 4.588598345s to provisionDockerMachine
	I1201 20:38:12.029034  487012 client.go:176] duration metric: took 12.754955953s to LocalClient.Create
	I1201 20:38:12.029049  487012 start.go:167] duration metric: took 12.755033547s to libmachine.API.Create "addons-947185"
	I1201 20:38:12.029057  487012 start.go:293] postStartSetup for "addons-947185" (driver="docker")
	I1201 20:38:12.029070  487012 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:38:12.029151  487012 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:38:12.029202  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:12.048622  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:12.155635  487012 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:38:12.159243  487012 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:38:12.159272  487012 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:38:12.159286  487012 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/addons for local assets ...
	I1201 20:38:12.159361  487012 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/files for local assets ...
	I1201 20:38:12.159396  487012 start.go:296] duration metric: took 130.331161ms for postStartSetup
	I1201 20:38:12.159723  487012 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-947185
	I1201 20:38:12.177533  487012 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/config.json ...
	I1201 20:38:12.177834  487012 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:38:12.177877  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:12.196999  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:12.300395  487012 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 20:38:12.305559  487012 start.go:128] duration metric: took 13.035391847s to createHost
	I1201 20:38:12.305583  487012 start.go:83] releasing machines lock for "addons-947185", held for 13.035518918s
	I1201 20:38:12.305669  487012 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-947185
	I1201 20:38:12.324022  487012 ssh_runner.go:195] Run: cat /version.json
	I1201 20:38:12.324052  487012 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:38:12.324086  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:12.324122  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:12.344755  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:12.346188  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:12.550810  487012 ssh_runner.go:195] Run: systemctl --version
	I1201 20:38:12.557835  487012 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:38:12.604750  487012 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:38:12.609711  487012 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:38:12.609801  487012 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:38:12.640367  487012 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1201 20:38:12.640435  487012 start.go:496] detecting cgroup driver to use...
	I1201 20:38:12.640483  487012 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1201 20:38:12.640545  487012 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:38:12.658833  487012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:38:12.673086  487012 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:38:12.673170  487012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:38:12.692207  487012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:38:12.711886  487012 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:38:12.837966  487012 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:38:13.001471  487012 docker.go:234] disabling docker service ...
	I1201 20:38:13.001643  487012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:38:13.028303  487012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:38:13.044294  487012 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:38:13.163711  487012 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:38:13.285594  487012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:38:13.299482  487012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:38:13.313674  487012 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:38:13.313741  487012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:38:13.322544  487012 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 20:38:13.322617  487012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:38:13.331623  487012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:38:13.341401  487012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:38:13.350386  487012 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:38:13.359329  487012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:38:13.368209  487012 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:38:13.382426  487012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:38:13.391364  487012 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:38:13.399240  487012 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:38:13.406674  487012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:38:13.523096  487012 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:38:13.715610  487012 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:38:13.715758  487012 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:38:13.720097  487012 start.go:564] Will wait 60s for crictl version
	I1201 20:38:13.720212  487012 ssh_runner.go:195] Run: which crictl
	I1201 20:38:13.724628  487012 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:38:13.762132  487012 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:38:13.762248  487012 ssh_runner.go:195] Run: crio --version
	I1201 20:38:13.793729  487012 ssh_runner.go:195] Run: crio --version
	I1201 20:38:13.828489  487012 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1201 20:38:13.831439  487012 cli_runner.go:164] Run: docker network inspect addons-947185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:38:13.849480  487012 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1201 20:38:13.853899  487012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:38:13.865445  487012 kubeadm.go:884] updating cluster {Name:addons-947185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-947185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:38:13.865586  487012 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:38:13.865661  487012 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:38:13.908880  487012 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:38:13.908912  487012 crio.go:433] Images already preloaded, skipping extraction
	I1201 20:38:13.908978  487012 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:38:13.938524  487012 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:38:13.938557  487012 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:38:13.938568  487012 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1201 20:38:13.938686  487012 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-947185 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-947185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:38:13.938789  487012 ssh_runner.go:195] Run: crio config
	I1201 20:38:14.005872  487012 cni.go:84] Creating CNI manager for ""
	I1201 20:38:14.005904  487012 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:38:14.005926  487012 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:38:14.005963  487012 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-947185 NodeName:addons-947185 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:38:14.006121  487012 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-947185"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:38:14.006216  487012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 20:38:14.019189  487012 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:38:14.019268  487012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:38:14.028231  487012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1201 20:38:14.043240  487012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:38:14.057553  487012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1201 20:38:14.071510  487012 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:38:14.075477  487012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:38:14.085926  487012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:38:14.212078  487012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:38:14.230213  487012 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185 for IP: 192.168.49.2
	I1201 20:38:14.230239  487012 certs.go:195] generating shared ca certs ...
	I1201 20:38:14.230258  487012 certs.go:227] acquiring lock for ca certs: {Name:mk0475ccdbd6f854bab22fd8dfb32cc1af021336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:14.230403  487012 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key
	I1201 20:38:14.800855  487012 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt ...
	I1201 20:38:14.800891  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt: {Name:mk9fe7877c71b72180f6a27d4f902ec2ec04e60b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:14.801096  487012 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key ...
	I1201 20:38:14.801111  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key: {Name:mk899a0cc35d8e3bbf101a27d9c94b28eb4fb86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:14.801201  487012 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key
	I1201 20:38:15.074083  487012 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt ...
	I1201 20:38:15.074118  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt: {Name:mk99934bf2ab6aeba7185d55ef0520915ada0c3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:15.074321  487012 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key ...
	I1201 20:38:15.074335  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key: {Name:mk9662579547bce463933ce154561132d5c1876e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:15.074427  487012 certs.go:257] generating profile certs ...
	I1201 20:38:15.074489  487012 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.key
	I1201 20:38:15.074507  487012 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt with IP's: []
	I1201 20:38:15.427314  487012 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt ...
	I1201 20:38:15.427349  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: {Name:mkef1fab619944026e5c7d4ee81bc92ca8d90c44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:15.427547  487012 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.key ...
	I1201 20:38:15.427562  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.key: {Name:mka5a6f220819854db9e95d3a642773ca88b1d1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:15.427655  487012 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.key.b13e0a5a
	I1201 20:38:15.427679  487012 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.crt.b13e0a5a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1201 20:38:15.805369  487012 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.crt.b13e0a5a ...
	I1201 20:38:15.805402  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.crt.b13e0a5a: {Name:mk7b8677589e1b0f0cdcce61d3108c968878641d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:15.805597  487012 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.key.b13e0a5a ...
	I1201 20:38:15.805622  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.key.b13e0a5a: {Name:mkc2d0134106a2ede954dfff9ec2d5e3ac522f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:15.805734  487012 certs.go:382] copying /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.crt.b13e0a5a -> /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.crt
	I1201 20:38:15.805822  487012 certs.go:386] copying /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.key.b13e0a5a -> /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.key
	I1201 20:38:15.805877  487012 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/proxy-client.key
	I1201 20:38:15.805898  487012 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/proxy-client.crt with IP's: []
	I1201 20:38:15.959077  487012 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/proxy-client.crt ...
	I1201 20:38:15.959113  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/proxy-client.crt: {Name:mk4a47b4c9f51d8fb32f63b7ed11d2b04d887b07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:15.959313  487012 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/proxy-client.key ...
	I1201 20:38:15.959326  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/proxy-client.key: {Name:mk68e6758ea083b22239fcdf82e2a70a6d38c3c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:15.959550  487012 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 20:38:15.959600  487012 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:38:15.959639  487012 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:38:15.959671  487012 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem (1675 bytes)
	I1201 20:38:15.960326  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:38:15.980671  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 20:38:16.001042  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:38:16.026386  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1201 20:38:16.047096  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1201 20:38:16.066948  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:38:16.085863  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:38:16.105096  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 20:38:16.125404  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:38:16.144692  487012 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:38:16.158897  487012 ssh_runner.go:195] Run: openssl version
	I1201 20:38:16.165528  487012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:38:16.174622  487012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:38:16.178731  487012 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:38:16.178813  487012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:38:16.222043  487012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:38:16.230916  487012 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:38:16.234913  487012 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1201 20:38:16.234975  487012 kubeadm.go:401] StartCluster: {Name:addons-947185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-947185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:38:16.235058  487012 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:38:16.235118  487012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:38:16.267866  487012 cri.go:89] found id: ""
	I1201 20:38:16.267956  487012 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:38:16.276455  487012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 20:38:16.284895  487012 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 20:38:16.284973  487012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 20:38:16.293378  487012 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 20:38:16.293403  487012 kubeadm.go:158] found existing configuration files:
	
	I1201 20:38:16.293462  487012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1201 20:38:16.301876  487012 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 20:38:16.301977  487012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 20:38:16.310590  487012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1201 20:38:16.320944  487012 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 20:38:16.321018  487012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 20:38:16.329419  487012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1201 20:38:16.337926  487012 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 20:38:16.338003  487012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 20:38:16.346135  487012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1201 20:38:16.354244  487012 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 20:38:16.354321  487012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 20:38:16.362535  487012 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 20:38:16.410121  487012 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1201 20:38:16.410519  487012 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 20:38:16.436356  487012 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 20:38:16.436433  487012 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1201 20:38:16.436475  487012 kubeadm.go:319] OS: Linux
	I1201 20:38:16.436525  487012 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 20:38:16.436579  487012 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1201 20:38:16.436634  487012 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 20:38:16.436708  487012 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 20:38:16.436761  487012 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 20:38:16.436813  487012 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 20:38:16.436863  487012 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 20:38:16.436914  487012 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 20:38:16.436965  487012 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1201 20:38:16.509998  487012 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 20:38:16.510119  487012 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 20:38:16.510222  487012 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 20:38:16.519480  487012 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 20:38:16.526032  487012 out.go:252]   - Generating certificates and keys ...
	I1201 20:38:16.526148  487012 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 20:38:16.526226  487012 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 20:38:16.767727  487012 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1201 20:38:17.814927  487012 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1201 20:38:18.239971  487012 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1201 20:38:18.753538  487012 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1201 20:38:19.249012  487012 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1201 20:38:19.249385  487012 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-947185 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1201 20:38:20.276815  487012 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1201 20:38:20.276969  487012 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-947185 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1201 20:38:21.123713  487012 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1201 20:38:21.344660  487012 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1201 20:38:22.338049  487012 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1201 20:38:22.338357  487012 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 20:38:22.673911  487012 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 20:38:23.114522  487012 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 20:38:24.256949  487012 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 20:38:24.540344  487012 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 20:38:24.980102  487012 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 20:38:24.981457  487012 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 20:38:24.985629  487012 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 20:38:24.989077  487012 out.go:252]   - Booting up control plane ...
	I1201 20:38:24.989190  487012 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 20:38:24.989270  487012 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 20:38:24.989955  487012 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 20:38:25.007804  487012 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 20:38:25.008196  487012 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 20:38:25.016918  487012 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 20:38:25.017247  487012 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 20:38:25.017294  487012 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 20:38:25.161354  487012 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 20:38:25.161477  487012 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 20:38:26.669814  487012 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.508117066s
	I1201 20:38:26.673746  487012 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1201 20:38:26.673977  487012 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1201 20:38:26.674076  487012 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1201 20:38:26.674158  487012 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1201 20:38:29.301212  487012 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.626385758s
	I1201 20:38:30.768029  487012 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.093588792s
	I1201 20:38:32.676282  487012 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001620812s
	I1201 20:38:32.717414  487012 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1201 20:38:32.733456  487012 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1201 20:38:32.753165  487012 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1201 20:38:32.753374  487012 kubeadm.go:319] [mark-control-plane] Marking the node addons-947185 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1201 20:38:32.766123  487012 kubeadm.go:319] [bootstrap-token] Using token: v7wip5.2p795q6uvmytpget
	I1201 20:38:32.769051  487012 out.go:252]   - Configuring RBAC rules ...
	I1201 20:38:32.769174  487012 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1201 20:38:32.774620  487012 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1201 20:38:32.788469  487012 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1201 20:38:32.793892  487012 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1201 20:38:32.798826  487012 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1201 20:38:32.803900  487012 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1201 20:38:33.083783  487012 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1201 20:38:33.522675  487012 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1201 20:38:34.083697  487012 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1201 20:38:34.084866  487012 kubeadm.go:319] 
	I1201 20:38:34.084940  487012 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1201 20:38:34.084947  487012 kubeadm.go:319] 
	I1201 20:38:34.085023  487012 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1201 20:38:34.085028  487012 kubeadm.go:319] 
	I1201 20:38:34.085053  487012 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1201 20:38:34.085110  487012 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1201 20:38:34.085159  487012 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1201 20:38:34.085163  487012 kubeadm.go:319] 
	I1201 20:38:34.085216  487012 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1201 20:38:34.085221  487012 kubeadm.go:319] 
	I1201 20:38:34.085268  487012 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1201 20:38:34.085273  487012 kubeadm.go:319] 
	I1201 20:38:34.085323  487012 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1201 20:38:34.085396  487012 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1201 20:38:34.085463  487012 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1201 20:38:34.085468  487012 kubeadm.go:319] 
	I1201 20:38:34.085550  487012 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1201 20:38:34.085641  487012 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1201 20:38:34.085647  487012 kubeadm.go:319] 
	I1201 20:38:34.085727  487012 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token v7wip5.2p795q6uvmytpget \
	I1201 20:38:34.085827  487012 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ba416c8b9f9df321471bca98b9f543ca561a2f4cf5ae7c15e9cc221036e7ebbc \
	I1201 20:38:34.085848  487012 kubeadm.go:319] 	--control-plane 
	I1201 20:38:34.085853  487012 kubeadm.go:319] 
	I1201 20:38:34.085934  487012 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1201 20:38:34.085938  487012 kubeadm.go:319] 
	I1201 20:38:34.086030  487012 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token v7wip5.2p795q6uvmytpget \
	I1201 20:38:34.086141  487012 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ba416c8b9f9df321471bca98b9f543ca561a2f4cf5ae7c15e9cc221036e7ebbc 
	I1201 20:38:34.089144  487012 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1201 20:38:34.089382  487012 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1201 20:38:34.089490  487012 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 20:38:34.089511  487012 cni.go:84] Creating CNI manager for ""
	I1201 20:38:34.089521  487012 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:38:34.092802  487012 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1201 20:38:34.096042  487012 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1201 20:38:34.100679  487012 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1201 20:38:34.100707  487012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1201 20:38:34.115707  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1201 20:38:34.436805  487012 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1201 20:38:34.436964  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:34.437047  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-947185 minikube.k8s.io/updated_at=2025_12_01T20_38_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9 minikube.k8s.io/name=addons-947185 minikube.k8s.io/primary=true
	I1201 20:38:34.658322  487012 ops.go:34] apiserver oom_adj: -16
	I1201 20:38:34.658448  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:35.159027  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:35.658780  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:36.159075  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:36.658691  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:37.159118  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:37.659162  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:38.158788  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:38.659243  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:38.772336  487012 kubeadm.go:1114] duration metric: took 4.335422723s to wait for elevateKubeSystemPrivileges
	I1201 20:38:38.772363  487012 kubeadm.go:403] duration metric: took 22.537392775s to StartCluster
	I1201 20:38:38.772380  487012 settings.go:142] acquiring lock: {Name:mk783c1fd28fb527bb837882511f132133dc86fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:38.772495  487012 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 20:38:38.772935  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/kubeconfig: {Name:mk92cfd0553ba70a7f11610c1bc1b8b04b905ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:38.773145  487012 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:38:38.773247  487012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1201 20:38:38.773485  487012 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:38:38.773515  487012 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1201 20:38:38.773584  487012 addons.go:70] Setting yakd=true in profile "addons-947185"
	I1201 20:38:38.773598  487012 addons.go:239] Setting addon yakd=true in "addons-947185"
	I1201 20:38:38.773624  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.774143  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.774719  487012 addons.go:70] Setting metrics-server=true in profile "addons-947185"
	I1201 20:38:38.774747  487012 addons.go:239] Setting addon metrics-server=true in "addons-947185"
	I1201 20:38:38.774760  487012 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-947185"
	I1201 20:38:38.774778  487012 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-947185"
	I1201 20:38:38.774781  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.774801  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.775254  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.775338  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.778447  487012 addons.go:70] Setting registry=true in profile "addons-947185"
	I1201 20:38:38.778590  487012 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-947185"
	I1201 20:38:38.778616  487012 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-947185"
	I1201 20:38:38.778657  487012 addons.go:239] Setting addon registry=true in "addons-947185"
	I1201 20:38:38.778682  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.778823  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.779002  487012 addons.go:70] Setting cloud-spanner=true in profile "addons-947185"
	I1201 20:38:38.779015  487012 addons.go:239] Setting addon cloud-spanner=true in "addons-947185"
	I1201 20:38:38.779033  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.779402  487012 addons.go:70] Setting registry-creds=true in profile "addons-947185"
	I1201 20:38:38.779427  487012 addons.go:239] Setting addon registry-creds=true in "addons-947185"
	I1201 20:38:38.779467  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.779722  487012 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-947185"
	I1201 20:38:38.779793  487012 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-947185"
	I1201 20:38:38.779814  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.779916  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.780658  487012 addons.go:70] Setting default-storageclass=true in profile "addons-947185"
	I1201 20:38:38.780687  487012 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-947185"
	I1201 20:38:38.782323  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.785392  487012 addons.go:70] Setting storage-provisioner=true in profile "addons-947185"
	I1201 20:38:38.785432  487012 addons.go:239] Setting addon storage-provisioner=true in "addons-947185"
	I1201 20:38:38.785477  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.786042  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.791711  487012 addons.go:70] Setting gcp-auth=true in profile "addons-947185"
	I1201 20:38:38.791763  487012 mustload.go:66] Loading cluster: addons-947185
	I1201 20:38:38.792005  487012 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:38:38.792335  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.803471  487012 addons.go:70] Setting ingress=true in profile "addons-947185"
	I1201 20:38:38.803510  487012 addons.go:239] Setting addon ingress=true in "addons-947185"
	I1201 20:38:38.803558  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.804059  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.817649  487012 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-947185"
	I1201 20:38:38.817699  487012 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-947185"
	I1201 20:38:38.818065  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.826867  487012 addons.go:70] Setting ingress-dns=true in profile "addons-947185"
	I1201 20:38:38.826908  487012 addons.go:239] Setting addon ingress-dns=true in "addons-947185"
	I1201 20:38:38.826951  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.827498  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.852356  487012 addons.go:70] Setting volcano=true in profile "addons-947185"
	I1201 20:38:38.852400  487012 addons.go:239] Setting addon volcano=true in "addons-947185"
	I1201 20:38:38.852439  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.852931  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.855504  487012 addons.go:70] Setting inspektor-gadget=true in profile "addons-947185"
	I1201 20:38:38.855549  487012 addons.go:239] Setting addon inspektor-gadget=true in "addons-947185"
	I1201 20:38:38.855590  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.858039  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.872866  487012 addons.go:70] Setting volumesnapshots=true in profile "addons-947185"
	I1201 20:38:38.872903  487012 addons.go:239] Setting addon volumesnapshots=true in "addons-947185"
	I1201 20:38:38.872938  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.873444  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.877476  487012 out.go:179] * Verifying Kubernetes components...
	I1201 20:38:38.881430  487012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:38:38.881996  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.902107  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.961632  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.984689  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:39.023304  487012 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1201 20:38:39.032302  487012 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1201 20:38:39.040118  487012 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1201 20:38:39.048282  487012 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:38:39.048368  487012 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1201 20:38:39.049700  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1201 20:38:39.049828  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.050096  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:39.048394  487012 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1201 20:38:39.073885  487012 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1201 20:38:39.073998  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.092294  487012 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1201 20:38:39.098213  487012 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1201 20:38:39.098289  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1201 20:38:39.098390  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.101391  487012 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:38:39.101474  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:38:39.101570  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.113900  487012 addons.go:239] Setting addon default-storageclass=true in "addons-947185"
	I1201 20:38:39.113994  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:39.114504  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:39.123733  487012 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1201 20:38:39.124717  487012 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1201 20:38:39.124807  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	W1201 20:38:39.145027  487012 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1201 20:38:39.147609  487012 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-947185"
	I1201 20:38:39.147650  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:39.151200  487012 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1201 20:38:39.151431  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:39.152455  487012 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1201 20:38:39.169858  487012 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1201 20:38:39.173554  487012 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1201 20:38:39.179310  487012 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1201 20:38:39.182998  487012 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1201 20:38:39.183034  487012 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1201 20:38:39.183114  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.183465  487012 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1201 20:38:39.184428  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1201 20:38:39.184500  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.208646  487012 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1201 20:38:39.208671  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1201 20:38:39.208737  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.218507  487012 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1201 20:38:39.220350  487012 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1201 20:38:39.243250  487012 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1201 20:38:39.249670  487012 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1201 20:38:39.249754  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1201 20:38:39.249851  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.283666  487012 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1201 20:38:39.283703  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1201 20:38:39.283769  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.313160  487012 out.go:179]   - Using image docker.io/registry:3.0.0
	I1201 20:38:39.317191  487012 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1201 20:38:39.317217  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1201 20:38:39.317283  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.335366  487012 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1201 20:38:39.335791  487012 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1201 20:38:39.347252  487012 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1201 20:38:39.350652  487012 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1201 20:38:39.350679  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1201 20:38:39.350762  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.376613  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.386803  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.387427  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.388015  487012 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 20:38:39.388029  487012 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 20:38:39.388159  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.388427  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.389291  487012 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1201 20:38:39.399453  487012 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1201 20:38:39.403208  487012 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1201 20:38:39.408906  487012 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1201 20:38:39.417454  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.423810  487012 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1201 20:38:39.427631  487012 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1201 20:38:39.432852  487012 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1201 20:38:39.432937  487012 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1201 20:38:39.433052  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.446489  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.449680  487012 out.go:179]   - Using image docker.io/busybox:stable
	I1201 20:38:39.458362  487012 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1201 20:38:39.462630  487012 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1201 20:38:39.462658  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1201 20:38:39.462731  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.468728  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.480354  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.520690  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.539525  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.540379  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.568163  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	W1201 20:38:39.575283  487012 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1201 20:38:39.575402  487012 retry.go:31] will retry after 164.846186ms: ssh: handshake failed: EOF
	I1201 20:38:39.592670  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.597195  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.600622  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	W1201 20:38:39.602088  487012 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1201 20:38:39.602116  487012 retry.go:31] will retry after 356.967056ms: ssh: handshake failed: EOF
	W1201 20:38:39.742247  487012 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1201 20:38:39.742325  487012 retry.go:31] will retry after 316.520771ms: ssh: handshake failed: EOF
	I1201 20:38:39.781714  487012 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.008441997s)
	I1201 20:38:39.781750  487012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:38:39.781997  487012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1201 20:38:39.895670  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1201 20:38:40.032755  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:38:40.060811  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1201 20:38:40.076260  487012 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1201 20:38:40.076283  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1201 20:38:40.097730  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1201 20:38:40.216995  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1201 20:38:40.227519  487012 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1201 20:38:40.227596  487012 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1201 20:38:40.244856  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1201 20:38:40.269725  487012 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1201 20:38:40.269805  487012 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1201 20:38:40.286245  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 20:38:40.286965  487012 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1201 20:38:40.286987  487012 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1201 20:38:40.289857  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1201 20:38:40.321434  487012 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1201 20:38:40.321457  487012 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1201 20:38:40.337236  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1201 20:38:40.388288  487012 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1201 20:38:40.388317  487012 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1201 20:38:40.407466  487012 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1201 20:38:40.407492  487012 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1201 20:38:40.470163  487012 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1201 20:38:40.470189  487012 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1201 20:38:40.479253  487012 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1201 20:38:40.479280  487012 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1201 20:38:40.494526  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1201 20:38:40.570934  487012 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1201 20:38:40.570961  487012 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1201 20:38:40.652009  487012 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1201 20:38:40.652034  487012 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1201 20:38:40.668790  487012 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1201 20:38:40.668818  487012 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1201 20:38:40.827007  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1201 20:38:40.862620  487012 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1201 20:38:40.862652  487012 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1201 20:38:40.879507  487012 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1201 20:38:40.879532  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1201 20:38:40.947374  487012 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1201 20:38:40.947402  487012 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1201 20:38:40.956733  487012 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1201 20:38:40.956762  487012 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1201 20:38:41.181069  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1201 20:38:41.247285  487012 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1201 20:38:41.247317  487012 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1201 20:38:41.284501  487012 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1201 20:38:41.284529  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1201 20:38:41.404154  487012 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1201 20:38:41.404186  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1201 20:38:41.516402  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1201 20:38:41.550983  487012 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1201 20:38:41.551013  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1201 20:38:41.697137  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1201 20:38:41.876570  487012 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1201 20:38:41.876598  487012 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1201 20:38:41.991387  487012 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.209361706s)
	I1201 20:38:41.991420  487012 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1201 20:38:41.992590  487012 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.210628692s)
	I1201 20:38:41.992708  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.097008775s)
	I1201 20:38:41.993576  487012 node_ready.go:35] waiting up to 6m0s for node "addons-947185" to be "Ready" ...
	I1201 20:38:42.248618  487012 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1201 20:38:42.248646  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1201 20:38:42.440984  487012 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1201 20:38:42.441009  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1201 20:38:42.499048  487012 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-947185" context rescaled to 1 replicas
	I1201 20:38:42.670216  487012 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1201 20:38:42.670243  487012 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1201 20:38:42.805199  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1201 20:38:43.948104  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.915310656s)
	I1201 20:38:43.948237  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.887400524s)
	I1201 20:38:43.948292  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.850486574s)
	W1201 20:38:44.041915  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:38:45.782974  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.565940277s)
	I1201 20:38:45.783007  487012 addons.go:495] Verifying addon ingress=true in "addons-947185"
	I1201 20:38:45.783301  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.538314703s)
	I1201 20:38:45.783347  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.497027947s)
	I1201 20:38:45.783465  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.493581779s)
	I1201 20:38:45.783535  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.446268287s)
	I1201 20:38:45.783611  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.289060754s)
	I1201 20:38:45.783624  487012 addons.go:495] Verifying addon metrics-server=true in "addons-947185"
	I1201 20:38:45.783666  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.956634923s)
	I1201 20:38:45.783836  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.602741846s)
	I1201 20:38:45.786110  487012 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-947185 service yakd-dashboard -n yakd-dashboard
	
	I1201 20:38:45.786276  487012 out.go:179] * Verifying ingress addon...
	I1201 20:38:45.789897  487012 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1201 20:38:45.813459  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.29700949s)
	W1201 20:38:45.813497  487012 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1201 20:38:45.813518  487012 retry.go:31] will retry after 129.835211ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1201 20:38:45.813566  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.116397207s)
	I1201 20:38:45.813580  487012 addons.go:495] Verifying addon registry=true in "addons-947185"
	I1201 20:38:45.816975  487012 out.go:179] * Verifying registry addon...
	I1201 20:38:45.820118  487012 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1201 20:38:45.820186  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:45.821402  487012 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1201 20:38:45.829885  487012 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1201 20:38:45.851240  487012 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1201 20:38:45.851270  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:45.944082  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1201 20:38:46.201395  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.396146794s)
	I1201 20:38:46.201432  487012 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-947185"
	I1201 20:38:46.204494  487012 out.go:179] * Verifying csi-hostpath-driver addon...
	I1201 20:38:46.208175  487012 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1201 20:38:46.229303  487012 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1201 20:38:46.229328  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:46.293011  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:46.324851  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1201 20:38:46.496893  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:38:46.711908  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:46.756086  487012 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1201 20:38:46.756201  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:46.774316  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:46.815967  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:46.824605  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:46.891053  487012 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1201 20:38:46.905981  487012 addons.go:239] Setting addon gcp-auth=true in "addons-947185"
	I1201 20:38:46.906060  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:46.906546  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:46.924154  487012 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1201 20:38:46.924218  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:46.942079  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:47.212405  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:47.293669  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:47.324729  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:47.712190  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:47.792994  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:47.824585  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:48.211746  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:48.293600  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:48.325275  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1201 20:38:48.501001  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:38:48.712247  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:48.754994  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.81086588s)
	I1201 20:38:48.755051  487012 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.830869689s)
	I1201 20:38:48.758407  487012 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1201 20:38:48.761406  487012 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1201 20:38:48.764689  487012 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1201 20:38:48.764721  487012 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1201 20:38:48.778289  487012 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1201 20:38:48.778315  487012 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1201 20:38:48.793657  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:48.794887  487012 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1201 20:38:48.794949  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1201 20:38:48.808769  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1201 20:38:48.825127  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:49.212143  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:49.303051  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:49.338486  487012 addons.go:495] Verifying addon gcp-auth=true in "addons-947185"
	I1201 20:38:49.340245  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:49.341873  487012 out.go:179] * Verifying gcp-auth addon...
	I1201 20:38:49.345670  487012 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1201 20:38:49.430250  487012 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1201 20:38:49.430272  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:49.711409  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:49.793487  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:49.825499  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:49.849282  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:50.211260  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:50.293344  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:50.325229  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:50.348921  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:50.711628  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:50.793883  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:50.825574  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:50.849504  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:38:50.996861  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:38:51.211811  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:51.293961  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:51.324636  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:51.349395  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:51.712270  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:51.793177  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:51.824868  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:51.849512  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:52.211972  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:52.294509  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:52.324388  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:52.349519  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:52.711312  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:52.793472  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:52.825464  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:52.849448  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:53.212006  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:53.292885  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:53.325019  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:53.348648  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:38:53.496398  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:38:53.711863  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:53.792863  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:53.825479  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:53.849522  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:54.211688  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:54.293728  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:54.324863  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:54.348708  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:54.711727  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:54.793844  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:54.824171  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:54.849309  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:55.211431  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:55.293631  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:55.324742  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:55.349470  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:55.711800  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:55.793159  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:55.824965  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:55.848676  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:38:55.996545  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:38:56.211460  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:56.293288  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:56.325147  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:56.349024  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:56.712582  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:56.793844  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:56.824867  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:56.848910  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:57.211970  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:57.293468  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:57.325383  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:57.349194  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:57.710952  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:57.794076  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:57.825196  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:57.849845  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:38:57.996885  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:38:58.212073  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:58.293070  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:58.324973  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:58.349048  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:58.712013  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:58.794190  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:58.825206  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:58.848865  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:59.211935  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:59.293060  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:59.325026  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:59.349330  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:59.711862  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:59.794228  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:59.825018  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:59.848922  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:38:59.997713  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:39:00.213876  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:00.315949  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:00.330622  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:00.360562  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:00.712488  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:00.793440  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:00.824581  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:00.851104  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:01.212618  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:01.313800  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:01.324603  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:01.349958  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:01.712052  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:01.793356  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:01.825247  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:01.848950  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:02.211740  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:02.293944  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:02.324850  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:02.348877  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:39:02.496674  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:39:02.713035  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:02.793182  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:02.825046  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:02.848811  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:03.211831  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:03.292824  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:03.324898  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:03.348547  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:03.712336  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:03.793292  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:03.825568  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:03.849503  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:04.212262  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:04.296564  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:04.324739  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:04.349562  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:04.711658  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:04.793807  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:04.824926  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:04.848988  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:39:04.997066  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:39:05.211184  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:05.292981  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:05.324805  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:05.349741  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:05.711638  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:05.793523  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:05.825430  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:05.849512  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:06.211412  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:06.293255  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:06.325421  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:06.349395  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:06.711954  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:06.793196  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:06.824883  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:06.848830  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:07.211892  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:07.293028  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:07.325027  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:07.348891  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:39:07.496633  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:39:07.712237  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:07.800800  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:07.824829  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:07.849462  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:08.211639  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:08.294206  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:08.325359  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:08.349385  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:08.711967  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:08.794200  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:08.825828  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:08.849720  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:09.211319  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:09.294092  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:09.324649  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:09.348657  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:09.711929  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:09.793789  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:09.824700  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:09.849542  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:39:09.997966  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:39:10.211113  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:10.293170  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:10.325024  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:10.348920  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:10.711772  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:10.793721  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:10.824515  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:10.849260  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:11.211258  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:11.293851  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:11.324563  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:11.349465  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:11.711309  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:11.793712  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:11.824782  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:11.848796  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:39:11.998508  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:39:12.212207  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:12.293251  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:12.324892  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:12.348851  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:12.712426  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:12.793392  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:12.824246  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:12.849103  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:13.215102  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:13.292910  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:13.324891  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:13.348887  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:13.711550  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:13.794939  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:13.832891  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:13.853398  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:14.212315  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:14.293642  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:14.324754  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:14.348723  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:39:14.496912  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:39:14.712748  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:14.794185  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:14.825173  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:14.849400  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:15.211447  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:15.293401  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:15.324406  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:15.349315  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:15.711482  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:15.793575  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:15.824298  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:15.849268  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:16.211239  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:16.293325  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:16.325147  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:16.349040  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:16.712193  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:16.793287  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:16.825167  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:16.849420  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:39:16.996481  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:39:17.211611  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:17.293936  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:17.325067  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:17.348750  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:17.712015  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:17.793439  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:17.824655  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:17.848836  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:18.211732  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:18.293693  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:18.324346  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:18.349285  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:18.712130  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:18.793627  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:18.824525  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:18.849557  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:39:18.997824  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:39:19.211431  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:19.293426  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:19.324363  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:19.349628  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:19.711839  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:19.793641  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:19.824318  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:19.849648  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:20.218540  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:20.357102  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:20.362341  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:20.366082  487012 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1201 20:39:20.366106  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:20.501169  487012 node_ready.go:49] node "addons-947185" is "Ready"
	I1201 20:39:20.501200  487012 node_ready.go:38] duration metric: took 38.507586694s for node "addons-947185" to be "Ready" ...
	I1201 20:39:20.501215  487012 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:39:20.501299  487012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:39:20.521846  487012 api_server.go:72] duration metric: took 41.748664512s to wait for apiserver process to appear ...
	I1201 20:39:20.521874  487012 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:39:20.521897  487012 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1201 20:39:20.561317  487012 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1201 20:39:20.562595  487012 api_server.go:141] control plane version: v1.34.2
	I1201 20:39:20.562627  487012 api_server.go:131] duration metric: took 40.744794ms to wait for apiserver health ...
	I1201 20:39:20.562653  487012 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:39:20.621706  487012 system_pods.go:59] 19 kube-system pods found
	I1201 20:39:20.621748  487012 system_pods.go:61] "coredns-66bc5c9577-q75zt" [86654e25-6e26-4560-8d18-004462848af1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:39:20.621756  487012 system_pods.go:61] "csi-hostpath-attacher-0" [8882ae38-7b51-48e3-b45f-6a57e1d061a5] Pending
	I1201 20:39:20.621801  487012 system_pods.go:61] "csi-hostpath-resizer-0" [ba624dda-a9cc-4957-b1e8-a3f4fce7a73d] Pending
	I1201 20:39:20.621807  487012 system_pods.go:61] "csi-hostpathplugin-z8frr" [54aeb006-3353-4509-b7cf-de3d4a788010] Pending
	I1201 20:39:20.621820  487012 system_pods.go:61] "etcd-addons-947185" [3c528131-96e4-4354-85af-e7458a367454] Running
	I1201 20:39:20.621824  487012 system_pods.go:61] "kindnet-5m5nn" [ececdb4a-2857-423e-a7fe-064b8e4f4367] Running
	I1201 20:39:20.621828  487012 system_pods.go:61] "kube-apiserver-addons-947185" [7d5d681f-2541-4c55-b4ee-fadc73c99dc1] Running
	I1201 20:39:20.621832  487012 system_pods.go:61] "kube-controller-manager-addons-947185" [09c58456-d5d8-43ea-813c-6916dd523945] Running
	I1201 20:39:20.621842  487012 system_pods.go:61] "kube-ingress-dns-minikube" [b1a2f555-4a13-46f0-8cef-06487f0d428e] Pending
	I1201 20:39:20.621860  487012 system_pods.go:61] "kube-proxy-6l2m9" [8f2ae58e-c00a-4eda-8189-afd1332e44e0] Running
	I1201 20:39:20.621869  487012 system_pods.go:61] "kube-scheduler-addons-947185" [b9e3706a-7729-4d2d-b67d-63466041f58a] Running
	I1201 20:39:20.621873  487012 system_pods.go:61] "metrics-server-85b7d694d7-wwwt5" [32b3ea6f-e4c4-4e63-8992-e1371c406519] Pending
	I1201 20:39:20.621889  487012 system_pods.go:61] "nvidia-device-plugin-daemonset-mm775" [ff4d850a-4fc7-4f97-b4d7-a5fec7ea255d] Pending
	I1201 20:39:20.621900  487012 system_pods.go:61] "registry-6b586f9694-m876b" [99b02fcf-a463-48f7-b563-a88a6be051c6] Pending
	I1201 20:39:20.621906  487012 system_pods.go:61] "registry-creds-764b6fb674-qc52j" [178c8099-fe59-4a00-9d1a-a0a80a1b7d7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 20:39:20.621910  487012 system_pods.go:61] "registry-proxy-scbhm" [42f9b46b-5402-4199-a084-012a354ce2c6] Pending
	I1201 20:39:20.621923  487012 system_pods.go:61] "snapshot-controller-7d9fbc56b8-h8r4s" [06fd0354-f315-44f9-9068-c26a9a2b06d5] Pending
	I1201 20:39:20.621927  487012 system_pods.go:61] "snapshot-controller-7d9fbc56b8-r8wng" [cb5f3819-4d80-432e-86b6-a32cd6b18a29] Pending
	I1201 20:39:20.621931  487012 system_pods.go:61] "storage-provisioner" [00707e13-d913-4314-876e-5ca4180ae588] Pending
	I1201 20:39:20.621937  487012 system_pods.go:74] duration metric: took 59.272542ms to wait for pod list to return data ...
	I1201 20:39:20.621945  487012 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:39:20.632112  487012 default_sa.go:45] found service account: "default"
	I1201 20:39:20.632139  487012 default_sa.go:55] duration metric: took 10.186391ms for default service account to be created ...
	I1201 20:39:20.632150  487012 system_pods.go:116] waiting for k8s-apps to be running ...
	I1201 20:39:20.656756  487012 system_pods.go:86] 19 kube-system pods found
	I1201 20:39:20.656796  487012 system_pods.go:89] "coredns-66bc5c9577-q75zt" [86654e25-6e26-4560-8d18-004462848af1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:39:20.656803  487012 system_pods.go:89] "csi-hostpath-attacher-0" [8882ae38-7b51-48e3-b45f-6a57e1d061a5] Pending
	I1201 20:39:20.656808  487012 system_pods.go:89] "csi-hostpath-resizer-0" [ba624dda-a9cc-4957-b1e8-a3f4fce7a73d] Pending
	I1201 20:39:20.656812  487012 system_pods.go:89] "csi-hostpathplugin-z8frr" [54aeb006-3353-4509-b7cf-de3d4a788010] Pending
	I1201 20:39:20.656816  487012 system_pods.go:89] "etcd-addons-947185" [3c528131-96e4-4354-85af-e7458a367454] Running
	I1201 20:39:20.656821  487012 system_pods.go:89] "kindnet-5m5nn" [ececdb4a-2857-423e-a7fe-064b8e4f4367] Running
	I1201 20:39:20.656825  487012 system_pods.go:89] "kube-apiserver-addons-947185" [7d5d681f-2541-4c55-b4ee-fadc73c99dc1] Running
	I1201 20:39:20.656829  487012 system_pods.go:89] "kube-controller-manager-addons-947185" [09c58456-d5d8-43ea-813c-6916dd523945] Running
	I1201 20:39:20.656859  487012 system_pods.go:89] "kube-ingress-dns-minikube" [b1a2f555-4a13-46f0-8cef-06487f0d428e] Pending
	I1201 20:39:20.656865  487012 system_pods.go:89] "kube-proxy-6l2m9" [8f2ae58e-c00a-4eda-8189-afd1332e44e0] Running
	I1201 20:39:20.656877  487012 system_pods.go:89] "kube-scheduler-addons-947185" [b9e3706a-7729-4d2d-b67d-63466041f58a] Running
	I1201 20:39:20.656881  487012 system_pods.go:89] "metrics-server-85b7d694d7-wwwt5" [32b3ea6f-e4c4-4e63-8992-e1371c406519] Pending
	I1201 20:39:20.656885  487012 system_pods.go:89] "nvidia-device-plugin-daemonset-mm775" [ff4d850a-4fc7-4f97-b4d7-a5fec7ea255d] Pending
	I1201 20:39:20.656896  487012 system_pods.go:89] "registry-6b586f9694-m876b" [99b02fcf-a463-48f7-b563-a88a6be051c6] Pending
	I1201 20:39:20.656903  487012 system_pods.go:89] "registry-creds-764b6fb674-qc52j" [178c8099-fe59-4a00-9d1a-a0a80a1b7d7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 20:39:20.656907  487012 system_pods.go:89] "registry-proxy-scbhm" [42f9b46b-5402-4199-a084-012a354ce2c6] Pending
	I1201 20:39:20.656912  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-h8r4s" [06fd0354-f315-44f9-9068-c26a9a2b06d5] Pending
	I1201 20:39:20.656941  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r8wng" [cb5f3819-4d80-432e-86b6-a32cd6b18a29] Pending
	I1201 20:39:20.656952  487012 system_pods.go:89] "storage-provisioner" [00707e13-d913-4314-876e-5ca4180ae588] Pending
	I1201 20:39:20.656977  487012 retry.go:31] will retry after 283.580229ms: missing components: kube-dns
	I1201 20:39:20.747698  487012 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1201 20:39:20.747727  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:20.799585  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:20.827535  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:20.849618  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:20.949284  487012 system_pods.go:86] 19 kube-system pods found
	I1201 20:39:20.949324  487012 system_pods.go:89] "coredns-66bc5c9577-q75zt" [86654e25-6e26-4560-8d18-004462848af1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:39:20.949364  487012 system_pods.go:89] "csi-hostpath-attacher-0" [8882ae38-7b51-48e3-b45f-6a57e1d061a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1201 20:39:20.949379  487012 system_pods.go:89] "csi-hostpath-resizer-0" [ba624dda-a9cc-4957-b1e8-a3f4fce7a73d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1201 20:39:20.949385  487012 system_pods.go:89] "csi-hostpathplugin-z8frr" [54aeb006-3353-4509-b7cf-de3d4a788010] Pending
	I1201 20:39:20.949395  487012 system_pods.go:89] "etcd-addons-947185" [3c528131-96e4-4354-85af-e7458a367454] Running
	I1201 20:39:20.949401  487012 system_pods.go:89] "kindnet-5m5nn" [ececdb4a-2857-423e-a7fe-064b8e4f4367] Running
	I1201 20:39:20.949405  487012 system_pods.go:89] "kube-apiserver-addons-947185" [7d5d681f-2541-4c55-b4ee-fadc73c99dc1] Running
	I1201 20:39:20.949409  487012 system_pods.go:89] "kube-controller-manager-addons-947185" [09c58456-d5d8-43ea-813c-6916dd523945] Running
	I1201 20:39:20.949433  487012 system_pods.go:89] "kube-ingress-dns-minikube" [b1a2f555-4a13-46f0-8cef-06487f0d428e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1201 20:39:20.949444  487012 system_pods.go:89] "kube-proxy-6l2m9" [8f2ae58e-c00a-4eda-8189-afd1332e44e0] Running
	I1201 20:39:20.949450  487012 system_pods.go:89] "kube-scheduler-addons-947185" [b9e3706a-7729-4d2d-b67d-63466041f58a] Running
	I1201 20:39:20.949456  487012 system_pods.go:89] "metrics-server-85b7d694d7-wwwt5" [32b3ea6f-e4c4-4e63-8992-e1371c406519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1201 20:39:20.949465  487012 system_pods.go:89] "nvidia-device-plugin-daemonset-mm775" [ff4d850a-4fc7-4f97-b4d7-a5fec7ea255d] Pending
	I1201 20:39:20.949473  487012 system_pods.go:89] "registry-6b586f9694-m876b" [99b02fcf-a463-48f7-b563-a88a6be051c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1201 20:39:20.949483  487012 system_pods.go:89] "registry-creds-764b6fb674-qc52j" [178c8099-fe59-4a00-9d1a-a0a80a1b7d7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 20:39:20.949488  487012 system_pods.go:89] "registry-proxy-scbhm" [42f9b46b-5402-4199-a084-012a354ce2c6] Pending
	I1201 20:39:20.949496  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-h8r4s" [06fd0354-f315-44f9-9068-c26a9a2b06d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 20:39:20.949524  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r8wng" [cb5f3819-4d80-432e-86b6-a32cd6b18a29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 20:39:20.949544  487012 system_pods.go:89] "storage-provisioner" [00707e13-d913-4314-876e-5ca4180ae588] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:39:20.949566  487012 retry.go:31] will retry after 364.771103ms: missing components: kube-dns
	I1201 20:39:21.217894  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:21.313014  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:21.413832  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:21.414210  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:21.416262  487012 system_pods.go:86] 19 kube-system pods found
	I1201 20:39:21.416300  487012 system_pods.go:89] "coredns-66bc5c9577-q75zt" [86654e25-6e26-4560-8d18-004462848af1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:39:21.416310  487012 system_pods.go:89] "csi-hostpath-attacher-0" [8882ae38-7b51-48e3-b45f-6a57e1d061a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1201 20:39:21.416319  487012 system_pods.go:89] "csi-hostpath-resizer-0" [ba624dda-a9cc-4957-b1e8-a3f4fce7a73d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1201 20:39:21.416359  487012 system_pods.go:89] "csi-hostpathplugin-z8frr" [54aeb006-3353-4509-b7cf-de3d4a788010] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1201 20:39:21.416365  487012 system_pods.go:89] "etcd-addons-947185" [3c528131-96e4-4354-85af-e7458a367454] Running
	I1201 20:39:21.416381  487012 system_pods.go:89] "kindnet-5m5nn" [ececdb4a-2857-423e-a7fe-064b8e4f4367] Running
	I1201 20:39:21.416386  487012 system_pods.go:89] "kube-apiserver-addons-947185" [7d5d681f-2541-4c55-b4ee-fadc73c99dc1] Running
	I1201 20:39:21.416390  487012 system_pods.go:89] "kube-controller-manager-addons-947185" [09c58456-d5d8-43ea-813c-6916dd523945] Running
	I1201 20:39:21.416396  487012 system_pods.go:89] "kube-ingress-dns-minikube" [b1a2f555-4a13-46f0-8cef-06487f0d428e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1201 20:39:21.416404  487012 system_pods.go:89] "kube-proxy-6l2m9" [8f2ae58e-c00a-4eda-8189-afd1332e44e0] Running
	I1201 20:39:21.416437  487012 system_pods.go:89] "kube-scheduler-addons-947185" [b9e3706a-7729-4d2d-b67d-63466041f58a] Running
	I1201 20:39:21.416445  487012 system_pods.go:89] "metrics-server-85b7d694d7-wwwt5" [32b3ea6f-e4c4-4e63-8992-e1371c406519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1201 20:39:21.416456  487012 system_pods.go:89] "nvidia-device-plugin-daemonset-mm775" [ff4d850a-4fc7-4f97-b4d7-a5fec7ea255d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1201 20:39:21.416469  487012 system_pods.go:89] "registry-6b586f9694-m876b" [99b02fcf-a463-48f7-b563-a88a6be051c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1201 20:39:21.416475  487012 system_pods.go:89] "registry-creds-764b6fb674-qc52j" [178c8099-fe59-4a00-9d1a-a0a80a1b7d7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 20:39:21.416496  487012 system_pods.go:89] "registry-proxy-scbhm" [42f9b46b-5402-4199-a084-012a354ce2c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1201 20:39:21.416503  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-h8r4s" [06fd0354-f315-44f9-9068-c26a9a2b06d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 20:39:21.416516  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r8wng" [cb5f3819-4d80-432e-86b6-a32cd6b18a29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 20:39:21.416602  487012 system_pods.go:89] "storage-provisioner" [00707e13-d913-4314-876e-5ca4180ae588] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:39:21.416640  487012 retry.go:31] will retry after 330.733164ms: missing components: kube-dns
	I1201 20:39:21.711750  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:21.752189  487012 system_pods.go:86] 19 kube-system pods found
	I1201 20:39:21.752230  487012 system_pods.go:89] "coredns-66bc5c9577-q75zt" [86654e25-6e26-4560-8d18-004462848af1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:39:21.752240  487012 system_pods.go:89] "csi-hostpath-attacher-0" [8882ae38-7b51-48e3-b45f-6a57e1d061a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1201 20:39:21.752293  487012 system_pods.go:89] "csi-hostpath-resizer-0" [ba624dda-a9cc-4957-b1e8-a3f4fce7a73d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1201 20:39:21.752300  487012 system_pods.go:89] "csi-hostpathplugin-z8frr" [54aeb006-3353-4509-b7cf-de3d4a788010] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1201 20:39:21.752316  487012 system_pods.go:89] "etcd-addons-947185" [3c528131-96e4-4354-85af-e7458a367454] Running
	I1201 20:39:21.752342  487012 system_pods.go:89] "kindnet-5m5nn" [ececdb4a-2857-423e-a7fe-064b8e4f4367] Running
	I1201 20:39:21.752353  487012 system_pods.go:89] "kube-apiserver-addons-947185" [7d5d681f-2541-4c55-b4ee-fadc73c99dc1] Running
	I1201 20:39:21.752360  487012 system_pods.go:89] "kube-controller-manager-addons-947185" [09c58456-d5d8-43ea-813c-6916dd523945] Running
	I1201 20:39:21.752367  487012 system_pods.go:89] "kube-ingress-dns-minikube" [b1a2f555-4a13-46f0-8cef-06487f0d428e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1201 20:39:21.752376  487012 system_pods.go:89] "kube-proxy-6l2m9" [8f2ae58e-c00a-4eda-8189-afd1332e44e0] Running
	I1201 20:39:21.752381  487012 system_pods.go:89] "kube-scheduler-addons-947185" [b9e3706a-7729-4d2d-b67d-63466041f58a] Running
	I1201 20:39:21.752388  487012 system_pods.go:89] "metrics-server-85b7d694d7-wwwt5" [32b3ea6f-e4c4-4e63-8992-e1371c406519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1201 20:39:21.752400  487012 system_pods.go:89] "nvidia-device-plugin-daemonset-mm775" [ff4d850a-4fc7-4f97-b4d7-a5fec7ea255d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1201 20:39:21.752419  487012 system_pods.go:89] "registry-6b586f9694-m876b" [99b02fcf-a463-48f7-b563-a88a6be051c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1201 20:39:21.752432  487012 system_pods.go:89] "registry-creds-764b6fb674-qc52j" [178c8099-fe59-4a00-9d1a-a0a80a1b7d7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 20:39:21.752438  487012 system_pods.go:89] "registry-proxy-scbhm" [42f9b46b-5402-4199-a084-012a354ce2c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1201 20:39:21.752457  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-h8r4s" [06fd0354-f315-44f9-9068-c26a9a2b06d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 20:39:21.752470  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r8wng" [cb5f3819-4d80-432e-86b6-a32cd6b18a29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 20:39:21.752486  487012 system_pods.go:89] "storage-provisioner" [00707e13-d913-4314-876e-5ca4180ae588] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:39:21.752506  487012 retry.go:31] will retry after 526.6024ms: missing components: kube-dns
	I1201 20:39:21.797469  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:21.830061  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:21.897398  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:22.213326  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:22.284210  487012 system_pods.go:86] 19 kube-system pods found
	I1201 20:39:22.284247  487012 system_pods.go:89] "coredns-66bc5c9577-q75zt" [86654e25-6e26-4560-8d18-004462848af1] Running
	I1201 20:39:22.284258  487012 system_pods.go:89] "csi-hostpath-attacher-0" [8882ae38-7b51-48e3-b45f-6a57e1d061a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1201 20:39:22.284299  487012 system_pods.go:89] "csi-hostpath-resizer-0" [ba624dda-a9cc-4957-b1e8-a3f4fce7a73d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1201 20:39:22.284314  487012 system_pods.go:89] "csi-hostpathplugin-z8frr" [54aeb006-3353-4509-b7cf-de3d4a788010] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1201 20:39:22.284321  487012 system_pods.go:89] "etcd-addons-947185" [3c528131-96e4-4354-85af-e7458a367454] Running
	I1201 20:39:22.284331  487012 system_pods.go:89] "kindnet-5m5nn" [ececdb4a-2857-423e-a7fe-064b8e4f4367] Running
	I1201 20:39:22.284335  487012 system_pods.go:89] "kube-apiserver-addons-947185" [7d5d681f-2541-4c55-b4ee-fadc73c99dc1] Running
	I1201 20:39:22.284340  487012 system_pods.go:89] "kube-controller-manager-addons-947185" [09c58456-d5d8-43ea-813c-6916dd523945] Running
	I1201 20:39:22.284379  487012 system_pods.go:89] "kube-ingress-dns-minikube" [b1a2f555-4a13-46f0-8cef-06487f0d428e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1201 20:39:22.284388  487012 system_pods.go:89] "kube-proxy-6l2m9" [8f2ae58e-c00a-4eda-8189-afd1332e44e0] Running
	I1201 20:39:22.284393  487012 system_pods.go:89] "kube-scheduler-addons-947185" [b9e3706a-7729-4d2d-b67d-63466041f58a] Running
	I1201 20:39:22.284399  487012 system_pods.go:89] "metrics-server-85b7d694d7-wwwt5" [32b3ea6f-e4c4-4e63-8992-e1371c406519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1201 20:39:22.284411  487012 system_pods.go:89] "nvidia-device-plugin-daemonset-mm775" [ff4d850a-4fc7-4f97-b4d7-a5fec7ea255d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1201 20:39:22.284417  487012 system_pods.go:89] "registry-6b586f9694-m876b" [99b02fcf-a463-48f7-b563-a88a6be051c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1201 20:39:22.284430  487012 system_pods.go:89] "registry-creds-764b6fb674-qc52j" [178c8099-fe59-4a00-9d1a-a0a80a1b7d7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 20:39:22.284436  487012 system_pods.go:89] "registry-proxy-scbhm" [42f9b46b-5402-4199-a084-012a354ce2c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1201 20:39:22.284461  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-h8r4s" [06fd0354-f315-44f9-9068-c26a9a2b06d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 20:39:22.284474  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r8wng" [cb5f3819-4d80-432e-86b6-a32cd6b18a29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 20:39:22.284479  487012 system_pods.go:89] "storage-provisioner" [00707e13-d913-4314-876e-5ca4180ae588] Running
	I1201 20:39:22.284502  487012 system_pods.go:126] duration metric: took 1.652344925s to wait for k8s-apps to be running ...
	I1201 20:39:22.284518  487012 system_svc.go:44] waiting for kubelet service to be running ....
	I1201 20:39:22.284590  487012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:39:22.294469  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:22.304437  487012 system_svc.go:56] duration metric: took 19.910351ms WaitForService to wait for kubelet
	I1201 20:39:22.304467  487012 kubeadm.go:587] duration metric: took 43.531298863s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:39:22.304487  487012 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:39:22.308464  487012 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1201 20:39:22.308496  487012 node_conditions.go:123] node cpu capacity is 2
	I1201 20:39:22.308511  487012 node_conditions.go:105] duration metric: took 4.018973ms to run NodePressure ...
	I1201 20:39:22.308551  487012 start.go:242] waiting for startup goroutines ...
	I1201 20:39:22.324648  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:22.348839  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:22.712170  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:22.793732  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:22.824946  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:22.850359  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:23.212287  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:23.293099  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:23.325200  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:23.349395  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:23.712046  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:23.793522  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:23.824898  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:23.849087  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:24.212404  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:24.294601  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:24.325096  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:24.349259  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:24.711719  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:24.793939  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:24.825453  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:24.850021  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:25.212901  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:25.293234  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:25.327049  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:25.349535  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:25.711699  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:25.793890  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:25.825025  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:25.848934  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:26.212110  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:26.294052  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:26.325665  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:26.350154  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:26.712403  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:26.793968  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:26.825535  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:26.850142  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:27.212324  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:27.293922  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:27.325477  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:27.350302  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:27.712480  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:27.794092  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:27.825728  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:27.849472  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:28.212965  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:28.293275  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:28.325331  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:28.349330  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:28.712079  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:28.792922  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:28.824789  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:28.848862  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:29.213562  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:29.297154  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:29.328186  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:29.351688  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:29.713294  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:29.794923  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:29.828444  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:29.855295  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:30.213379  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:30.294348  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:30.324903  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:30.349081  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:30.712125  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:30.794855  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:30.825599  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:30.849708  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:31.212315  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:31.293919  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:31.325497  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:31.350394  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:31.713341  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:31.793798  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:31.841633  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:31.859654  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:32.214117  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:32.293388  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:32.325515  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:32.349393  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:32.711747  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:32.794339  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:32.825122  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:32.872971  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:33.212219  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:33.293422  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:33.324931  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:33.350098  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:33.711682  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:33.793742  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:33.824508  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:33.849269  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:34.212111  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:34.293856  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:34.324598  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:34.349417  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:34.712049  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:34.793669  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:34.824877  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:34.849127  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:35.212041  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:35.294915  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:35.328186  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:35.356702  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:35.712430  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:35.793665  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:35.824270  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:35.849589  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:36.212394  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:36.293152  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:36.324845  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:36.348883  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:36.711510  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:36.794512  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:36.825925  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:36.849285  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:37.212142  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:37.292795  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:37.325046  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:37.349673  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:37.713474  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:37.794169  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:37.826485  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:37.849921  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:38.213810  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:38.295346  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:38.324941  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:38.349392  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:38.712305  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:38.801117  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:38.826102  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:38.849858  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:39.212143  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:39.298071  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:39.325613  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:39.351330  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:39.712541  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:39.793994  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:39.825242  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:39.849196  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:40.212245  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:40.293517  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:40.324472  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:40.349464  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:40.712201  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:40.793614  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:40.824821  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:40.848977  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:41.212285  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:41.293921  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:41.325178  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:41.349075  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:41.713645  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:41.793832  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:41.825182  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:41.849401  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:42.213216  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:42.293524  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:42.328387  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:42.349539  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:42.713254  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:42.793693  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:42.825328  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:42.849761  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:43.213307  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:43.293881  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:43.325590  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:43.349609  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:43.712638  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:43.793735  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:43.824903  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:43.848946  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:44.212479  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:44.297037  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:44.397996  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:44.398490  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:44.711955  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:44.793754  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:44.825151  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:44.849812  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:45.254158  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:45.299501  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:45.341664  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:45.349161  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:45.712756  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:45.794277  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:45.825398  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:45.849552  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:46.212790  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:46.294225  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:46.325431  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:46.349383  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:46.711568  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:46.793844  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:46.825079  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:46.849025  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:47.212621  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:47.294108  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:47.325564  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:47.349516  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:47.712718  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:47.794495  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:47.824698  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:47.848877  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:48.212242  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:48.293588  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:48.324799  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:48.348770  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:48.711925  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:48.793113  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:48.825551  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:48.849752  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:49.211948  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:49.293516  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:49.324662  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:49.349939  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:49.712367  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:49.794212  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:49.825538  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:49.849384  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:50.212410  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:50.293408  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:50.325488  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:50.349349  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:50.711653  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:50.793948  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:50.825219  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:50.849455  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:51.212748  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:51.293856  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:51.325377  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:51.354922  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:51.712659  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:51.813046  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:51.824948  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:51.849847  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:52.213031  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:52.313022  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:52.325053  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:52.348968  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:52.712768  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:52.794226  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:52.825526  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:52.849894  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:53.211701  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:53.293898  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:53.325531  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:53.349973  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:53.712637  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:53.812536  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:53.825519  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:53.849898  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:54.211860  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:54.293185  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:54.325105  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:54.349196  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:54.711967  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:54.793250  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:54.825446  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:54.849787  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:55.212312  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:55.293548  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:55.324634  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:55.348827  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:55.713339  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:55.793514  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:55.825738  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:55.849325  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:56.213083  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:56.293434  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:56.324567  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:56.348820  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:56.711393  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:56.793622  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:56.824757  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:56.848766  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:57.211996  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:57.292974  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:57.324736  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:57.348710  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:57.713215  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:57.797320  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:57.825107  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:57.849996  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:58.213587  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:58.293799  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:58.327215  487012 kapi.go:107] duration metric: took 1m12.505797288s to wait for kubernetes.io/minikube-addons=registry ...
	I1201 20:39:58.348709  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:58.712483  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:58.793735  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:58.848675  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:59.211792  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:59.293828  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:59.350162  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:59.711579  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:59.793973  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:59.850117  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:00.246747  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:00.325410  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:00.356437  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:00.715848  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:00.797718  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:00.849649  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:01.212625  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:01.295766  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:01.358171  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:01.713625  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:01.794160  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:01.850494  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:02.212034  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:02.294083  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:02.348909  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:02.713276  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:02.793632  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:02.851193  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:03.213344  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:03.294508  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:03.350030  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:03.721230  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:03.793974  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:03.849507  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:04.212835  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:04.293710  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:04.349285  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:04.713262  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:04.793419  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:04.849907  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:05.212597  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:05.315310  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:05.350248  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:05.714839  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:05.794246  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:05.850647  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:06.213370  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:06.314088  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:06.350656  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:06.712311  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:06.793727  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:06.849783  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:07.212555  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:07.293608  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:07.349152  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:07.712929  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:07.794356  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:07.849983  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:08.213561  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:08.293999  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:08.349519  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:08.712337  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:08.793845  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:08.849299  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:09.212589  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:09.302570  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:09.397000  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:09.714509  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:09.794440  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:09.849850  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:10.212958  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:10.293518  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:10.350197  487012 kapi.go:107] duration metric: took 1m21.004526803s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1201 20:40:10.353484  487012 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-947185 cluster.
	I1201 20:40:10.356695  487012 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1201 20:40:10.360040  487012 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1201 20:40:10.712360  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:10.823395  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:11.213496  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:11.313395  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:11.712630  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:11.793861  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:12.212636  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:12.294330  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:12.713017  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:12.794407  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:13.218485  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:13.293713  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:13.713430  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:13.793922  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:14.212507  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:14.294062  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:14.713322  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:14.794305  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:15.300407  487012 kapi.go:107] duration metric: took 1m29.092234874s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1201 20:40:15.307486  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:15.793366  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:16.294693  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:16.793502  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:17.293110  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:17.793926  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:18.293057  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:18.793887  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:19.293902  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:19.793954  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:20.293488  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:20.794012  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:21.293780  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:21.793416  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:22.292915  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:22.793816  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:23.293707  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:23.793780  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:24.294016  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:24.793985  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:25.293459  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:25.794242  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:26.294558  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:26.793895  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:27.294111  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:27.794182  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:28.293456  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:28.793566  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:29.293495  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:29.794276  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:30.293901  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:30.793029  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:31.294625  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:31.793459  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:32.294190  487012 kapi.go:107] duration metric: took 1m46.504286182s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1201 20:40:32.297187  487012 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, ingress-dns, registry-creds, cloud-spanner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1201 20:40:32.299879  487012 addons.go:530] duration metric: took 1m53.526352603s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner nvidia-device-plugin ingress-dns registry-creds cloud-spanner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1201 20:40:32.299934  487012 start.go:247] waiting for cluster config update ...
	I1201 20:40:32.299957  487012 start.go:256] writing updated cluster config ...
	I1201 20:40:32.300287  487012 ssh_runner.go:195] Run: rm -f paused
	I1201 20:40:32.306714  487012 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:40:32.310377  487012 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q75zt" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:32.315556  487012 pod_ready.go:94] pod "coredns-66bc5c9577-q75zt" is "Ready"
	I1201 20:40:32.315588  487012 pod_ready.go:86] duration metric: took 5.181812ms for pod "coredns-66bc5c9577-q75zt" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:32.317913  487012 pod_ready.go:83] waiting for pod "etcd-addons-947185" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:32.325018  487012 pod_ready.go:94] pod "etcd-addons-947185" is "Ready"
	I1201 20:40:32.325050  487012 pod_ready.go:86] duration metric: took 7.110036ms for pod "etcd-addons-947185" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:32.333683  487012 pod_ready.go:83] waiting for pod "kube-apiserver-addons-947185" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:32.339917  487012 pod_ready.go:94] pod "kube-apiserver-addons-947185" is "Ready"
	I1201 20:40:32.339946  487012 pod_ready.go:86] duration metric: took 6.235543ms for pod "kube-apiserver-addons-947185" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:32.342632  487012 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-947185" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:32.710944  487012 pod_ready.go:94] pod "kube-controller-manager-addons-947185" is "Ready"
	I1201 20:40:32.710971  487012 pod_ready.go:86] duration metric: took 368.313701ms for pod "kube-controller-manager-addons-947185" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:32.911879  487012 pod_ready.go:83] waiting for pod "kube-proxy-6l2m9" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:33.310422  487012 pod_ready.go:94] pod "kube-proxy-6l2m9" is "Ready"
	I1201 20:40:33.310452  487012 pod_ready.go:86] duration metric: took 398.547244ms for pod "kube-proxy-6l2m9" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:33.510929  487012 pod_ready.go:83] waiting for pod "kube-scheduler-addons-947185" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:33.911457  487012 pod_ready.go:94] pod "kube-scheduler-addons-947185" is "Ready"
	I1201 20:40:33.911486  487012 pod_ready.go:86] duration metric: took 400.481834ms for pod "kube-scheduler-addons-947185" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:33.911502  487012 pod_ready.go:40] duration metric: took 1.604749625s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:40:33.984828  487012 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1201 20:40:33.990027  487012 out.go:179] * Done! kubectl is now configured to use "addons-947185" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 01 20:43:48 addons-947185 crio[829]: time="2025-12-01T20:43:48.427278235Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=7f05d05e-2b34-4297-9ce7-ae8dca0b992c name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:43:48 addons-947185 crio[829]: time="2025-12-01T20:43:48.429737499Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=c20d6a02-27ba-428d-a6f3-75875b38ca0d name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:43:48 addons-947185 crio[829]: time="2025-12-01T20:43:48.438408032Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-qc52j/registry-creds" id=3bf70176-a848-4c56-a6b0-7686c272b0c2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:43:48 addons-947185 crio[829]: time="2025-12-01T20:43:48.439621677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:43:48 addons-947185 crio[829]: time="2025-12-01T20:43:48.456588211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:43:48 addons-947185 crio[829]: time="2025-12-01T20:43:48.457279922Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:43:48 addons-947185 crio[829]: time="2025-12-01T20:43:48.482674411Z" level=info msg="Created container b822a67f64d876921ffffcd71b9b62689f04335f668c17c99430f4a78a3cdc9e: kube-system/registry-creds-764b6fb674-qc52j/registry-creds" id=3bf70176-a848-4c56-a6b0-7686c272b0c2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:43:48 addons-947185 crio[829]: time="2025-12-01T20:43:48.485931919Z" level=info msg="Starting container: b822a67f64d876921ffffcd71b9b62689f04335f668c17c99430f4a78a3cdc9e" id=ee1442ad-8e0c-4cda-bf02-36075d93863d name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:43:48 addons-947185 conmon[7151]: conmon b822a67f64d876921fff <ninfo>: container 7153 exited with status 1
	Dec 01 20:43:48 addons-947185 crio[829]: time="2025-12-01T20:43:48.499580811Z" level=info msg="Started container" PID=7153 containerID=b822a67f64d876921ffffcd71b9b62689f04335f668c17c99430f4a78a3cdc9e description=kube-system/registry-creds-764b6fb674-qc52j/registry-creds id=ee1442ad-8e0c-4cda-bf02-36075d93863d name=/runtime.v1.RuntimeService/StartContainer sandboxID=d61a8b1469857af910ae816bf52f3dbf3167e266510c8904e7ca6ce63b23aebe
	Dec 01 20:43:49 addons-947185 crio[829]: time="2025-12-01T20:43:49.000528238Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=5ac8c529-d7d0-49b9-b4af-1f139a8d5d38 name=/runtime.v1.ImageService/PullImage
	Dec 01 20:43:49 addons-947185 crio[829]: time="2025-12-01T20:43:49.001165427Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e29e1d8c-8201-4e69-a633-5f301b826946 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:43:49 addons-947185 crio[829]: time="2025-12-01T20:43:49.004297046Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e708e475-3c53-4b6c-a6ef-a0b6f2399c67 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:43:49 addons-947185 crio[829]: time="2025-12-01T20:43:49.015373479Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-thtfb/hello-world-app" id=412fc0fa-2dfc-46c8-9e5e-f4be9fbcc592 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:43:49 addons-947185 crio[829]: time="2025-12-01T20:43:49.015626372Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:43:49 addons-947185 crio[829]: time="2025-12-01T20:43:49.024732736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:43:49 addons-947185 crio[829]: time="2025-12-01T20:43:49.025119813Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/89fa396df9d6ed95fd8e686174e105ac10ca504ecd6bf48ab0d492bb80cd415e/merged/etc/passwd: no such file or directory"
	Dec 01 20:43:49 addons-947185 crio[829]: time="2025-12-01T20:43:49.025149827Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/89fa396df9d6ed95fd8e686174e105ac10ca504ecd6bf48ab0d492bb80cd415e/merged/etc/group: no such file or directory"
	Dec 01 20:43:49 addons-947185 crio[829]: time="2025-12-01T20:43:49.025575229Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:43:49 addons-947185 crio[829]: time="2025-12-01T20:43:49.048124221Z" level=info msg="Created container fb700a8c0223186f386a8d21e27979209722fe27694677856161c90af06d83e4: default/hello-world-app-5d498dc89-thtfb/hello-world-app" id=412fc0fa-2dfc-46c8-9e5e-f4be9fbcc592 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:43:49 addons-947185 crio[829]: time="2025-12-01T20:43:49.05048336Z" level=info msg="Starting container: fb700a8c0223186f386a8d21e27979209722fe27694677856161c90af06d83e4" id=df6eab51-3cfd-41a6-8037-d09b89e23986 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:43:49 addons-947185 crio[829]: time="2025-12-01T20:43:49.056719827Z" level=info msg="Started container" PID=7188 containerID=fb700a8c0223186f386a8d21e27979209722fe27694677856161c90af06d83e4 description=default/hello-world-app-5d498dc89-thtfb/hello-world-app id=df6eab51-3cfd-41a6-8037-d09b89e23986 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c99ccf730a3b620cbec121c420ddc37132a7824475292c30c57468d9be758d4
	Dec 01 20:43:49 addons-947185 crio[829]: time="2025-12-01T20:43:49.089570276Z" level=info msg="Removing container: 8477f1a38a702d4dc23ca9d419c55d4dc4b82d70142c1acc2ccc8b64059e633b" id=eb1c0f75-52a7-4f49-9cc4-073cc94ad936 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:43:49 addons-947185 crio[829]: time="2025-12-01T20:43:49.103521899Z" level=info msg="Error loading conmon cgroup of container 8477f1a38a702d4dc23ca9d419c55d4dc4b82d70142c1acc2ccc8b64059e633b: cgroup deleted" id=eb1c0f75-52a7-4f49-9cc4-073cc94ad936 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:43:49 addons-947185 crio[829]: time="2025-12-01T20:43:49.115710743Z" level=info msg="Removed container 8477f1a38a702d4dc23ca9d419c55d4dc4b82d70142c1acc2ccc8b64059e633b: kube-system/registry-creds-764b6fb674-qc52j/registry-creds" id=eb1c0f75-52a7-4f49-9cc4-073cc94ad936 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	fb700a8c02231       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        1 second ago        Running             hello-world-app                          0                   6c99ccf730a3b       hello-world-app-5d498dc89-thtfb            default
	b822a67f64d87       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             1 second ago        Exited              registry-creds                           2                   d61a8b1469857       registry-creds-764b6fb674-qc52j            kube-system
	bc89471fbefc3       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago       Running             nginx                                    0                   a9b91a2992790       nginx                                      default
	80f560bb03773       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago       Running             busybox                                  0                   5a629a96091e7       busybox                                    default
	ef1b80bc96780       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago       Running             controller                               0                   ca537e4f03d39       ingress-nginx-controller-6c8bf45fb-tsqsw   ingress-nginx
	1a93315e27a95       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             3 minutes ago       Exited              patch                                    3                   2e47de487632d       ingress-nginx-admission-patch-9mt5s        ingress-nginx
	7b62bd9d48709       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago       Running             csi-snapshotter                          0                   d79d06056f1a2       csi-hostpathplugin-z8frr                   kube-system
	29c40b113be21       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago       Running             csi-provisioner                          0                   d79d06056f1a2       csi-hostpathplugin-z8frr                   kube-system
	e850c7755eb34       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago       Running             liveness-probe                           0                   d79d06056f1a2       csi-hostpathplugin-z8frr                   kube-system
	efdf78311fe62       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago       Running             hostpath                                 0                   d79d06056f1a2       csi-hostpathplugin-z8frr                   kube-system
	2f2f59c27da37       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago       Running             gcp-auth                                 0                   4b4bce51fd535       gcp-auth-78565c9fb4-8vxpt                  gcp-auth
	232ae6c256a29       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago       Running             node-driver-registrar                    0                   d79d06056f1a2       csi-hostpathplugin-z8frr                   kube-system
	d7875f3fcd966       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago       Running             gadget                                   0                   34ab0fff3a418       gadget-ph2zs                               gadget
	7a49eee06f360       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago       Running             csi-attacher                             0                   4771b76c817dd       csi-hostpath-attacher-0                    kube-system
	2e43602ecbbd5       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago       Running             local-path-provisioner                   0                   8f94bb99d6a7d       local-path-provisioner-648f6765c9-zt7fb    local-path-storage
	6a99991f8f5f7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago       Exited              create                                   0                   511f190d7ccc8       ingress-nginx-admission-create-pqg7d       ingress-nginx
	dea3b2ad8e17b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago       Running             registry-proxy                           0                   0813e4130fba3       registry-proxy-scbhm                       kube-system
	295353c277ab2       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago       Running             csi-resizer                              0                   b6854d2b98d35       csi-hostpath-resizer-0                     kube-system
	b322f4a7417f9       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   639e05362b663       snapshot-controller-7d9fbc56b8-h8r4s       kube-system
	dfa409f637400       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago       Running             csi-external-health-monitor-controller   0                   d79d06056f1a2       csi-hostpathplugin-z8frr                   kube-system
	ed486d82e1fa5       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago       Running             yakd                                     0                   62abca8f433d7       yakd-dashboard-5ff678cb9-vsq2m             yakd-dashboard
	7e60a35a8eba6       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago       Running             volume-snapshot-controller               0                   47991145d8e11       snapshot-controller-7d9fbc56b8-r8wng       kube-system
	3f46bcefd8d83       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     4 minutes ago       Running             nvidia-device-plugin-ctr                 0                   12ac5a7a095bc       nvidia-device-plugin-daemonset-mm775       kube-system
	1f7b4e9296524       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               4 minutes ago       Running             cloud-spanner-emulator                   0                   fda14bda2fab1       cloud-spanner-emulator-5bdddb765-t5czj     default
	58cd25bffc816       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago       Running             registry                                 0                   54033b7e1fe3b       registry-6b586f9694-m876b                  kube-system
	361dc81943838       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago       Running             metrics-server                           0                   14c5179911c2d       metrics-server-85b7d694d7-wwwt5            kube-system
	9f83ec5f5e551       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago       Running             minikube-ingress-dns                     0                   19f4a67b0d738       kube-ingress-dns-minikube                  kube-system
	2355b41e2da84       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago       Running             coredns                                  0                   2dd71ab78b4d8       coredns-66bc5c9577-q75zt                   kube-system
	1837dcaf5caf8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago       Running             storage-provisioner                      0                   a4b7aa48089db       storage-provisioner                        kube-system
	95ac3b0ee00d6       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             5 minutes ago       Running             kube-proxy                               0                   49dc58cf3ba51       kube-proxy-6l2m9                           kube-system
	53fd34a71ad26       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago       Running             kindnet-cni                              0                   09f87990807d8       kindnet-5m5nn                              kube-system
	913315b106bf8       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             5 minutes ago       Running             kube-scheduler                           0                   625ee107f8cbb       kube-scheduler-addons-947185               kube-system
	d708a60b3df7c       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             5 minutes ago       Running             kube-apiserver                           0                   ac9bd49cc73a8       kube-apiserver-addons-947185               kube-system
	2608ffb63d779       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             5 minutes ago       Running             kube-controller-manager                  0                   26573ac936339       kube-controller-manager-addons-947185      kube-system
	969d358cb0a5c       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             5 minutes ago       Running             etcd                                     0                   3da43ff1b8ccc       etcd-addons-947185                         kube-system
	
	
	==> coredns [2355b41e2da84e3db29da2f6728212647e392fda597ebd954072085ccc5b4440] <==
	[INFO] 10.244.0.8:47227 - 20173 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.003448547s
	[INFO] 10.244.0.8:47227 - 59811 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000125962s
	[INFO] 10.244.0.8:47227 - 26837 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000091494s
	[INFO] 10.244.0.8:46888 - 44968 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000181584s
	[INFO] 10.244.0.8:46888 - 45198 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000251375s
	[INFO] 10.244.0.8:58537 - 13386 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000135595s
	[INFO] 10.244.0.8:58537 - 13181 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128883s
	[INFO] 10.244.0.8:54084 - 17590 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096474s
	[INFO] 10.244.0.8:54084 - 17401 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068995s
	[INFO] 10.244.0.8:49262 - 16235 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001267979s
	[INFO] 10.244.0.8:49262 - 16440 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001488028s
	[INFO] 10.244.0.8:60073 - 29478 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000200094s
	[INFO] 10.244.0.8:60073 - 29330 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000146763s
	[INFO] 10.244.0.20:37945 - 17659 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000175299s
	[INFO] 10.244.0.20:48842 - 21247 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000213149s
	[INFO] 10.244.0.20:36111 - 21050 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000161761s
	[INFO] 10.244.0.20:39336 - 33750 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000069619s
	[INFO] 10.244.0.20:53133 - 8518 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000144514s
	[INFO] 10.244.0.20:33714 - 49444 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077102s
	[INFO] 10.244.0.20:56722 - 46483 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002660862s
	[INFO] 10.244.0.20:50843 - 41739 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001726308s
	[INFO] 10.244.0.20:34480 - 40894 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000717147s
	[INFO] 10.244.0.20:35932 - 33776 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001442391s
	[INFO] 10.244.0.24:59064 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000218646s
	[INFO] 10.244.0.24:48716 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000481032s
	
	
	==> describe nodes <==
	Name:               addons-947185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-947185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=addons-947185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_38_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-947185
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-947185"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:38:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-947185
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:43:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:43:40 +0000   Mon, 01 Dec 2025 20:38:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:43:40 +0000   Mon, 01 Dec 2025 20:38:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:43:40 +0000   Mon, 01 Dec 2025 20:38:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 20:43:40 +0000   Mon, 01 Dec 2025 20:39:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-947185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                904801a4-17c3-4e2b-995e-dac559f4bfd9
	  Boot ID:                    06dea43b-2aa1-4726-8bb8-0a198189349a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  default                     cloud-spanner-emulator-5bdddb765-t5czj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  default                     hello-world-app-5d498dc89-thtfb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-ph2zs                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  gcp-auth                    gcp-auth-78565c9fb4-8vxpt                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-tsqsw    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m5s
	  kube-system                 coredns-66bc5c9577-q75zt                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m12s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 csi-hostpathplugin-z8frr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-addons-947185                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m17s
	  kube-system                 kindnet-5m5nn                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m12s
	  kube-system                 kube-apiserver-addons-947185                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-controller-manager-addons-947185       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-proxy-6l2m9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-scheduler-addons-947185                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 metrics-server-85b7d694d7-wwwt5             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m6s
	  kube-system                 nvidia-device-plugin-daemonset-mm775        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 registry-6b586f9694-m876b                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 registry-creds-764b6fb674-qc52j             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 registry-proxy-scbhm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 snapshot-controller-7d9fbc56b8-h8r4s        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 snapshot-controller-7d9fbc56b8-r8wng        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  local-path-storage          local-path-provisioner-648f6765c9-zt7fb     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-vsq2m              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m9s                   kube-proxy       
	  Warning  CgroupV1                 5m24s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m24s (x8 over 5m24s)  kubelet          Node addons-947185 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m24s (x8 over 5m24s)  kubelet          Node addons-947185 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m24s (x8 over 5m24s)  kubelet          Node addons-947185 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m17s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m17s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m17s                  kubelet          Node addons-947185 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m17s                  kubelet          Node addons-947185 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m17s                  kubelet          Node addons-947185 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m13s                  node-controller  Node addons-947185 event: Registered Node addons-947185 in Controller
	  Normal   NodeReady                4m30s                  kubelet          Node addons-947185 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 1 19:31] hrtimer: interrupt took 3224715 ns
	[Dec 1 20:00] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:16] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 1 20:22] systemd-journald[231]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 1 20:37] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:38] overlayfs: idmapped layers are currently not supported
	[  +0.076902] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [969d358cb0a5cd5ce66e56d51a58b46aef284ea9dc6eb5b45fbef1ed0b16310d] <==
	{"level":"warn","ts":"2025-12-01T20:38:29.468318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.487970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.504352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.529163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.543969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.556086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.579628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.593713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.612364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.631941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.648646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.663992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.687285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.712266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.738167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.761286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.782982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.799553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.899387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:46.350072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:46.355371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:39:07.723648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:39:07.738290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:39:07.771314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:39:07.786542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53026","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [2f2f59c27da378ee11ed11fde00c3a0effbb0dd7a3c3b6badb5ec864517fb892] <==
	2025/12/01 20:40:09 GCP Auth Webhook started!
	2025/12/01 20:40:34 Ready to marshal response ...
	2025/12/01 20:40:34 Ready to write response ...
	2025/12/01 20:40:34 Ready to marshal response ...
	2025/12/01 20:40:34 Ready to write response ...
	2025/12/01 20:40:34 Ready to marshal response ...
	2025/12/01 20:40:34 Ready to write response ...
	2025/12/01 20:40:52 Ready to marshal response ...
	2025/12/01 20:40:52 Ready to write response ...
	2025/12/01 20:40:54 Ready to marshal response ...
	2025/12/01 20:40:54 Ready to write response ...
	2025/12/01 20:41:08 Ready to marshal response ...
	2025/12/01 20:41:08 Ready to write response ...
	2025/12/01 20:41:08 Ready to marshal response ...
	2025/12/01 20:41:08 Ready to write response ...
	2025/12/01 20:41:12 Ready to marshal response ...
	2025/12/01 20:41:12 Ready to write response ...
	2025/12/01 20:41:16 Ready to marshal response ...
	2025/12/01 20:41:16 Ready to write response ...
	2025/12/01 20:41:28 Ready to marshal response ...
	2025/12/01 20:41:28 Ready to write response ...
	2025/12/01 20:43:47 Ready to marshal response ...
	2025/12/01 20:43:47 Ready to write response ...
	
	
	==> kernel <==
	 20:43:50 up  2:26,  0 user,  load average: 0.76, 1.29, 1.98
	Linux addons-947185 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [53fd34a71ad2647a883f70ec1aceb708dc5a011083d943427fe324abe79d43ac] <==
	I1201 20:41:49.723307       1 main.go:301] handling current node
	I1201 20:41:59.727294       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:41:59.727401       1 main.go:301] handling current node
	I1201 20:42:09.720580       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:42:09.720614       1 main.go:301] handling current node
	I1201 20:42:19.727334       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:42:19.727455       1 main.go:301] handling current node
	I1201 20:42:29.721629       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:42:29.721676       1 main.go:301] handling current node
	I1201 20:42:39.720579       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:42:39.720693       1 main.go:301] handling current node
	I1201 20:42:49.721009       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:42:49.721145       1 main.go:301] handling current node
	I1201 20:42:59.721631       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:42:59.721677       1 main.go:301] handling current node
	I1201 20:43:09.720789       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:43:09.720825       1 main.go:301] handling current node
	I1201 20:43:19.721251       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:43:19.721288       1 main.go:301] handling current node
	I1201 20:43:29.729761       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:43:29.729796       1 main.go:301] handling current node
	I1201 20:43:39.724505       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:43:39.724608       1 main.go:301] handling current node
	I1201 20:43:49.720814       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:43:49.720851       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d708a60b3df7ced4763b714c1f1a36c6df9483c81552da97ea0386f1f248b3ef] <==
	W1201 20:39:07.771002       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1201 20:39:07.786531       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1201 20:39:20.277252       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.190.154:443: connect: connection refused
	E1201 20:39:20.277299       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.190.154:443: connect: connection refused" logger="UnhandledError"
	W1201 20:39:20.277938       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.190.154:443: connect: connection refused
	E1201 20:39:20.277975       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.190.154:443: connect: connection refused" logger="UnhandledError"
	W1201 20:39:20.381602       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.190.154:443: connect: connection refused
	E1201 20:39:20.381643       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.190.154:443: connect: connection refused" logger="UnhandledError"
	E1201 20:39:32.916197       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.104:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.104:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.104:443: connect: connection refused" logger="UnhandledError"
	W1201 20:39:32.916754       1 handler_proxy.go:99] no RequestInfo found in the context
	E1201 20:39:32.916914       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1201 20:39:32.918819       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.104:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.104:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.104:443: connect: connection refused" logger="UnhandledError"
	E1201 20:39:32.952286       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.104:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.104:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.104:443: connect: connection refused" logger="UnhandledError"
	E1201 20:39:32.983500       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.104:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.104:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.104:443: connect: connection refused" logger="UnhandledError"
	I1201 20:39:33.124405       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1201 20:40:43.058412       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39032: use of closed network connection
	E1201 20:40:43.309193       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39050: use of closed network connection
	E1201 20:40:43.447773       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39076: use of closed network connection
	I1201 20:41:03.576658       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1201 20:41:28.609431       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1201 20:41:28.903470       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.252.201"}
	I1201 20:43:48.221698       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.137.43"}
	
	
	==> kube-controller-manager [2608ffb63d77980a71676c95316c60a1bf74002a61cf3024ec1b056d5b0cf0be] <==
	I1201 20:38:37.751999       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1201 20:38:37.758790       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1201 20:38:37.758895       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1201 20:38:37.759865       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1201 20:38:37.759887       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1201 20:38:37.759897       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1201 20:38:37.759908       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1201 20:38:37.760589       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1201 20:38:37.760616       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1201 20:38:37.760628       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1201 20:38:37.760635       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1201 20:38:37.773384       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1201 20:38:37.775560       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-947185" podCIDRs=["10.244.0.0/24"]
	I1201 20:38:37.798013       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 20:38:37.798060       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1201 20:38:37.798069       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1201 20:38:44.420067       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1201 20:39:07.716479       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1201 20:39:07.716643       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1201 20:39:07.716701       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1201 20:39:07.760058       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1201 20:39:07.764406       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1201 20:39:07.817688       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 20:39:07.864865       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 20:39:22.759013       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [95ac3b0ee00d6ddb757ec6c4e57282c44007d2ea906b924c19d96021bc597dd9] <==
	I1201 20:38:40.841865       1 server_linux.go:53] "Using iptables proxy"
	I1201 20:38:40.924826       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 20:38:41.025848       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 20:38:41.025888       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1201 20:38:41.025971       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 20:38:41.069191       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:38:41.069249       1 server_linux.go:132] "Using iptables Proxier"
	I1201 20:38:41.076560       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 20:38:41.077083       1 server.go:527] "Version info" version="v1.34.2"
	I1201 20:38:41.077100       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:38:41.084882       1 config.go:200] "Starting service config controller"
	I1201 20:38:41.084902       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 20:38:41.084929       1 config.go:106] "Starting endpoint slice config controller"
	I1201 20:38:41.084934       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 20:38:41.084947       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 20:38:41.084951       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 20:38:41.085599       1 config.go:309] "Starting node config controller"
	I1201 20:38:41.085607       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 20:38:41.085613       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 20:38:41.185484       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 20:38:41.185522       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 20:38:41.185587       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [913315b106bf848f2bc78aeee2dff59fb0d7a2768c8a5dc7d27460b0037c689d] <==
	E1201 20:38:30.760170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1201 20:38:30.760309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1201 20:38:30.760426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 20:38:30.760550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1201 20:38:30.760692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1201 20:38:30.760809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1201 20:38:30.761015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1201 20:38:30.761159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 20:38:30.761533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1201 20:38:30.761616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1201 20:38:30.761639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1201 20:38:30.761687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1201 20:38:30.761707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1201 20:38:31.594141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1201 20:38:31.599479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1201 20:38:31.607464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1201 20:38:31.639108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 20:38:31.699851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1201 20:38:31.773358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1201 20:38:31.837362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1201 20:38:31.909235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1201 20:38:31.933096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1201 20:38:32.022716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 20:38:32.265635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1201 20:38:35.143597       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 01 20:42:18 addons-947185 kubelet[1281]: I1201 20:42:18.426689    1281 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-mm775" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 20:42:27 addons-947185 kubelet[1281]: I1201 20:42:27.429310    1281 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-scbhm" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 20:43:18 addons-947185 kubelet[1281]: I1201 20:43:18.427091    1281 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-m876b" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 20:43:29 addons-947185 kubelet[1281]: I1201 20:43:29.426498    1281 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-mm775" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 20:43:30 addons-947185 kubelet[1281]: I1201 20:43:30.527884    1281 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-qc52j" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 20:43:30 addons-947185 kubelet[1281]: W1201 20:43:30.561358    1281 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1e76f49106608dd7ce6e43e1d3af9a19c21e25311ae9d3cf51c18fc94ebdecb9/crio-d61a8b1469857af910ae816bf52f3dbf3167e266510c8904e7ca6ce63b23aebe WatchSource:0}: Error finding container d61a8b1469857af910ae816bf52f3dbf3167e266510c8904e7ca6ce63b23aebe: Status 404 returned error can't find the container with id d61a8b1469857af910ae816bf52f3dbf3167e266510c8904e7ca6ce63b23aebe
	Dec 01 20:43:31 addons-947185 kubelet[1281]: I1201 20:43:31.984010    1281 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-qc52j" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 20:43:31 addons-947185 kubelet[1281]: I1201 20:43:31.984068    1281 scope.go:117] "RemoveContainer" containerID="51ce211fb1379111eb17746727d5134c3cc07d41c615e4723af6148999a04f46"
	Dec 01 20:43:32 addons-947185 kubelet[1281]: I1201 20:43:32.990319    1281 scope.go:117] "RemoveContainer" containerID="51ce211fb1379111eb17746727d5134c3cc07d41c615e4723af6148999a04f46"
	Dec 01 20:43:32 addons-947185 kubelet[1281]: I1201 20:43:32.990666    1281 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-qc52j" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 20:43:32 addons-947185 kubelet[1281]: I1201 20:43:32.990728    1281 scope.go:117] "RemoveContainer" containerID="8477f1a38a702d4dc23ca9d419c55d4dc4b82d70142c1acc2ccc8b64059e633b"
	Dec 01 20:43:32 addons-947185 kubelet[1281]: E1201 20:43:32.990899    1281 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-qc52j_kube-system(178c8099-fe59-4a00-9d1a-a0a80a1b7d7e)\"" pod="kube-system/registry-creds-764b6fb674-qc52j" podUID="178c8099-fe59-4a00-9d1a-a0a80a1b7d7e"
	Dec 01 20:43:33 addons-947185 kubelet[1281]: I1201 20:43:33.996195    1281 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-qc52j" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 20:43:33 addons-947185 kubelet[1281]: I1201 20:43:33.997269    1281 scope.go:117] "RemoveContainer" containerID="8477f1a38a702d4dc23ca9d419c55d4dc4b82d70142c1acc2ccc8b64059e633b"
	Dec 01 20:43:33 addons-947185 kubelet[1281]: E1201 20:43:33.997680    1281 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-qc52j_kube-system(178c8099-fe59-4a00-9d1a-a0a80a1b7d7e)\"" pod="kube-system/registry-creds-764b6fb674-qc52j" podUID="178c8099-fe59-4a00-9d1a-a0a80a1b7d7e"
	Dec 01 20:43:43 addons-947185 kubelet[1281]: I1201 20:43:43.428004    1281 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-scbhm" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 20:43:48 addons-947185 kubelet[1281]: I1201 20:43:48.193221    1281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/615b2da2-ea60-49ab-9b3d-ec565867fcfc-gcp-creds\") pod \"hello-world-app-5d498dc89-thtfb\" (UID: \"615b2da2-ea60-49ab-9b3d-ec565867fcfc\") " pod="default/hello-world-app-5d498dc89-thtfb"
	Dec 01 20:43:48 addons-947185 kubelet[1281]: I1201 20:43:48.193363    1281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxxw6\" (UniqueName: \"kubernetes.io/projected/615b2da2-ea60-49ab-9b3d-ec565867fcfc-kube-api-access-rxxw6\") pod \"hello-world-app-5d498dc89-thtfb\" (UID: \"615b2da2-ea60-49ab-9b3d-ec565867fcfc\") " pod="default/hello-world-app-5d498dc89-thtfb"
	Dec 01 20:43:48 addons-947185 kubelet[1281]: W1201 20:43:48.394443    1281 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1e76f49106608dd7ce6e43e1d3af9a19c21e25311ae9d3cf51c18fc94ebdecb9/crio-6c99ccf730a3b620cbec121c420ddc37132a7824475292c30c57468d9be758d4 WatchSource:0}: Error finding container 6c99ccf730a3b620cbec121c420ddc37132a7824475292c30c57468d9be758d4: Status 404 returned error can't find the container with id 6c99ccf730a3b620cbec121c420ddc37132a7824475292c30c57468d9be758d4
	Dec 01 20:43:48 addons-947185 kubelet[1281]: I1201 20:43:48.426116    1281 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-qc52j" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 20:43:48 addons-947185 kubelet[1281]: I1201 20:43:48.426187    1281 scope.go:117] "RemoveContainer" containerID="8477f1a38a702d4dc23ca9d419c55d4dc4b82d70142c1acc2ccc8b64059e633b"
	Dec 01 20:43:49 addons-947185 kubelet[1281]: I1201 20:43:49.076454    1281 scope.go:117] "RemoveContainer" containerID="8477f1a38a702d4dc23ca9d419c55d4dc4b82d70142c1acc2ccc8b64059e633b"
	Dec 01 20:43:49 addons-947185 kubelet[1281]: I1201 20:43:49.076879    1281 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-qc52j" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 20:43:49 addons-947185 kubelet[1281]: I1201 20:43:49.076951    1281 scope.go:117] "RemoveContainer" containerID="b822a67f64d876921ffffcd71b9b62689f04335f668c17c99430f4a78a3cdc9e"
	Dec 01 20:43:49 addons-947185 kubelet[1281]: E1201 20:43:49.077258    1281 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-qc52j_kube-system(178c8099-fe59-4a00-9d1a-a0a80a1b7d7e)\"" pod="kube-system/registry-creds-764b6fb674-qc52j" podUID="178c8099-fe59-4a00-9d1a-a0a80a1b7d7e"
	
	
	==> storage-provisioner [1837dcaf5caf8fbebc71252339be8e05fe293e1db73f148ce648a43a877e6c06] <==
	W1201 20:43:24.658888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:26.662707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:26.669569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:28.672613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:28.676955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:30.679955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:30.687538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:32.691838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:32.696385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:34.699335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:34.704079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:36.707384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:36.714457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:38.718225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:38.723942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:40.728895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:40.733907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:42.737842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:42.742888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:44.745898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:44.752759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:46.755626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:46.760702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:48.770885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:43:48.778758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-947185 -n addons-947185
helpers_test.go:269: (dbg) Run:  kubectl --context addons-947185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-pqg7d ingress-nginx-admission-patch-9mt5s
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-947185 describe pod ingress-nginx-admission-create-pqg7d ingress-nginx-admission-patch-9mt5s
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-947185 describe pod ingress-nginx-admission-create-pqg7d ingress-nginx-admission-patch-9mt5s: exit status 1 (105.163363ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-pqg7d" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9mt5s" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-947185 describe pod ingress-nginx-admission-create-pqg7d ingress-nginx-admission-patch-9mt5s: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-947185 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (267.716494ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1201 20:43:51.648891  496626 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:43:51.649801  496626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:43:51.649819  496626 out.go:374] Setting ErrFile to fd 2...
	I1201 20:43:51.649826  496626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:43:51.650157  496626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:43:51.650502  496626 mustload.go:66] Loading cluster: addons-947185
	I1201 20:43:51.650982  496626 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:43:51.651003  496626 addons.go:622] checking whether the cluster is paused
	I1201 20:43:51.651187  496626 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:43:51.651207  496626 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:43:51.651871  496626 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:43:51.670692  496626 ssh_runner.go:195] Run: systemctl --version
	I1201 20:43:51.670765  496626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:43:51.693481  496626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:43:51.798445  496626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:43:51.798538  496626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:43:51.829472  496626 cri.go:89] found id: "b822a67f64d876921ffffcd71b9b62689f04335f668c17c99430f4a78a3cdc9e"
	I1201 20:43:51.829548  496626 cri.go:89] found id: "7b62bd9d487096c579f1550339b68679be8332190765f60e06cd4937777a9df1"
	I1201 20:43:51.829570  496626 cri.go:89] found id: "29c40b113be21f8fe1bbe615bf111319d1777cc9025daf564682c1eefb3b445b"
	I1201 20:43:51.829595  496626 cri.go:89] found id: "e850c7755eb3428fe6fa7ba19c93fb7bc371967c19c4b82128cc91cb8053b5f3"
	I1201 20:43:51.829632  496626 cri.go:89] found id: "efdf78311fe62cfc0a35e43f8eeb729633a306dc0ea4ee568313518540399159"
	I1201 20:43:51.829655  496626 cri.go:89] found id: "232ae6c256a292c984f8cb48df8eceb3ee1873530d9e6f34c1a187c754908802"
	I1201 20:43:51.829679  496626 cri.go:89] found id: "7a49eee06f360dfeaf94beb2bbb4cdce7e5500414fdd2cee0ce12df2e5eb7f32"
	I1201 20:43:51.829716  496626 cri.go:89] found id: "dea3b2ad8e17be71f39c61f41026c7cb1b4623b5b887bff64c5b0486499999a1"
	I1201 20:43:51.829738  496626 cri.go:89] found id: "295353c277ab2fdf17a5bdf35885cd4aaf50e1c7a0310e8e9e47c938ee142acc"
	I1201 20:43:51.829775  496626 cri.go:89] found id: "b322f4a7417f96b30191db63c4f54268c9461124eb22cd29fa7aeee5aeec2c92"
	I1201 20:43:51.829814  496626 cri.go:89] found id: "dfa409f637400d697ead65609bdc54109d491cdce86d60e6c023d32ba59f02ae"
	I1201 20:43:51.829834  496626 cri.go:89] found id: "7e60a35a8eba6d85c1e35fe7520e0df66d2be5e95549b379c81bee82272e106c"
	I1201 20:43:51.829857  496626 cri.go:89] found id: "3f46bcefd8d83c33619ab577977393c12c9eb43945e7d3125f4e246f5b0455d5"
	I1201 20:43:51.829889  496626 cri.go:89] found id: "58cd25bffc816d350673df609f72e7f334b3ed0cfccb32cf1b2638a79781b10e"
	I1201 20:43:51.829915  496626 cri.go:89] found id: "361dc8194383806d837ada675e727c49f53ac9cfd9b315a3224ea1ce0ebfcc3b"
	I1201 20:43:51.829939  496626 cri.go:89] found id: "9f83ec5f5e5514d5a500d7b543761751c20c52d5b0c4da0872a31d0231b628fd"
	I1201 20:43:51.829984  496626 cri.go:89] found id: "2355b41e2da84e3db29da2f6728212647e392fda597ebd954072085ccc5b4440"
	I1201 20:43:51.830015  496626 cri.go:89] found id: "1837dcaf5caf8fbebc71252339be8e05fe293e1db73f148ce648a43a877e6c06"
	I1201 20:43:51.830035  496626 cri.go:89] found id: "95ac3b0ee00d6ddb757ec6c4e57282c44007d2ea906b924c19d96021bc597dd9"
	I1201 20:43:51.830066  496626 cri.go:89] found id: "53fd34a71ad2647a883f70ec1aceb708dc5a011083d943427fe324abe79d43ac"
	I1201 20:43:51.830089  496626 cri.go:89] found id: "913315b106bf848f2bc78aeee2dff59fb0d7a2768c8a5dc7d27460b0037c689d"
	I1201 20:43:51.830117  496626 cri.go:89] found id: "d708a60b3df7ced4763b714c1f1a36c6df9483c81552da97ea0386f1f248b3ef"
	I1201 20:43:51.830147  496626 cri.go:89] found id: "2608ffb63d77980a71676c95316c60a1bf74002a61cf3024ec1b056d5b0cf0be"
	I1201 20:43:51.830166  496626 cri.go:89] found id: "969d358cb0a5cd5ce66e56d51a58b46aef284ea9dc6eb5b45fbef1ed0b16310d"
	I1201 20:43:51.830186  496626 cri.go:89] found id: ""
	I1201 20:43:51.830268  496626 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:43:51.846026  496626 out.go:203] 
	W1201 20:43:51.849238  496626 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:43:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:43:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:43:51.849271  496626 out.go:285] * 
	* 
	W1201 20:43:51.856020  496626 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:43:51.858922  496626 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-947185 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-947185 addons disable ingress --alsologtostderr -v=1: exit status 11 (274.540649ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1201 20:43:51.912017  496668 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:43:51.912762  496668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:43:51.912805  496668 out.go:374] Setting ErrFile to fd 2...
	I1201 20:43:51.912830  496668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:43:51.913710  496668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:43:51.914099  496668 mustload.go:66] Loading cluster: addons-947185
	I1201 20:43:51.914551  496668 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:43:51.914575  496668 addons.go:622] checking whether the cluster is paused
	I1201 20:43:51.914730  496668 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:43:51.914756  496668 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:43:51.915442  496668 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:43:51.934010  496668 ssh_runner.go:195] Run: systemctl --version
	I1201 20:43:51.934084  496668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:43:51.953495  496668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:43:52.066691  496668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:43:52.066884  496668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:43:52.102417  496668 cri.go:89] found id: "b822a67f64d876921ffffcd71b9b62689f04335f668c17c99430f4a78a3cdc9e"
	I1201 20:43:52.102442  496668 cri.go:89] found id: "7b62bd9d487096c579f1550339b68679be8332190765f60e06cd4937777a9df1"
	I1201 20:43:52.102447  496668 cri.go:89] found id: "29c40b113be21f8fe1bbe615bf111319d1777cc9025daf564682c1eefb3b445b"
	I1201 20:43:52.102452  496668 cri.go:89] found id: "e850c7755eb3428fe6fa7ba19c93fb7bc371967c19c4b82128cc91cb8053b5f3"
	I1201 20:43:52.102456  496668 cri.go:89] found id: "efdf78311fe62cfc0a35e43f8eeb729633a306dc0ea4ee568313518540399159"
	I1201 20:43:52.102460  496668 cri.go:89] found id: "232ae6c256a292c984f8cb48df8eceb3ee1873530d9e6f34c1a187c754908802"
	I1201 20:43:52.102463  496668 cri.go:89] found id: "7a49eee06f360dfeaf94beb2bbb4cdce7e5500414fdd2cee0ce12df2e5eb7f32"
	I1201 20:43:52.102466  496668 cri.go:89] found id: "dea3b2ad8e17be71f39c61f41026c7cb1b4623b5b887bff64c5b0486499999a1"
	I1201 20:43:52.102470  496668 cri.go:89] found id: "295353c277ab2fdf17a5bdf35885cd4aaf50e1c7a0310e8e9e47c938ee142acc"
	I1201 20:43:52.102477  496668 cri.go:89] found id: "b322f4a7417f96b30191db63c4f54268c9461124eb22cd29fa7aeee5aeec2c92"
	I1201 20:43:52.102480  496668 cri.go:89] found id: "dfa409f637400d697ead65609bdc54109d491cdce86d60e6c023d32ba59f02ae"
	I1201 20:43:52.102484  496668 cri.go:89] found id: "7e60a35a8eba6d85c1e35fe7520e0df66d2be5e95549b379c81bee82272e106c"
	I1201 20:43:52.102487  496668 cri.go:89] found id: "3f46bcefd8d83c33619ab577977393c12c9eb43945e7d3125f4e246f5b0455d5"
	I1201 20:43:52.102491  496668 cri.go:89] found id: "58cd25bffc816d350673df609f72e7f334b3ed0cfccb32cf1b2638a79781b10e"
	I1201 20:43:52.102495  496668 cri.go:89] found id: "361dc8194383806d837ada675e727c49f53ac9cfd9b315a3224ea1ce0ebfcc3b"
	I1201 20:43:52.102508  496668 cri.go:89] found id: "9f83ec5f5e5514d5a500d7b543761751c20c52d5b0c4da0872a31d0231b628fd"
	I1201 20:43:52.102516  496668 cri.go:89] found id: "2355b41e2da84e3db29da2f6728212647e392fda597ebd954072085ccc5b4440"
	I1201 20:43:52.102521  496668 cri.go:89] found id: "1837dcaf5caf8fbebc71252339be8e05fe293e1db73f148ce648a43a877e6c06"
	I1201 20:43:52.102525  496668 cri.go:89] found id: "95ac3b0ee00d6ddb757ec6c4e57282c44007d2ea906b924c19d96021bc597dd9"
	I1201 20:43:52.102529  496668 cri.go:89] found id: "53fd34a71ad2647a883f70ec1aceb708dc5a011083d943427fe324abe79d43ac"
	I1201 20:43:52.102534  496668 cri.go:89] found id: "913315b106bf848f2bc78aeee2dff59fb0d7a2768c8a5dc7d27460b0037c689d"
	I1201 20:43:52.102541  496668 cri.go:89] found id: "d708a60b3df7ced4763b714c1f1a36c6df9483c81552da97ea0386f1f248b3ef"
	I1201 20:43:52.102546  496668 cri.go:89] found id: "2608ffb63d77980a71676c95316c60a1bf74002a61cf3024ec1b056d5b0cf0be"
	I1201 20:43:52.102549  496668 cri.go:89] found id: "969d358cb0a5cd5ce66e56d51a58b46aef284ea9dc6eb5b45fbef1ed0b16310d"
	I1201 20:43:52.102553  496668 cri.go:89] found id: ""
	I1201 20:43:52.102620  496668 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:43:52.121388  496668 out.go:203] 
	W1201 20:43:52.124319  496668 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:43:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:43:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:43:52.124344  496668 out.go:285] * 
	* 
	W1201 20:43:52.131105  496668 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:43:52.133987  496668 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-947185 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (143.85s)

TestAddons/parallel/InspektorGadget (6.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget


=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-ph2zs" [f11b0492-1e28-4041-989e-659bfa981700] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.012022046s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-947185 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (274.675947ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1201 20:41:28.062939  494853 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:41:28.063756  494853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:28.063801  494853 out.go:374] Setting ErrFile to fd 2...
	I1201 20:41:28.063828  494853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:28.064128  494853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:41:28.064494  494853 mustload.go:66] Loading cluster: addons-947185
	I1201 20:41:28.064926  494853 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:28.064972  494853 addons.go:622] checking whether the cluster is paused
	I1201 20:41:28.065110  494853 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:28.065145  494853 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:41:28.065691  494853 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:41:28.084499  494853 ssh_runner.go:195] Run: systemctl --version
	I1201 20:41:28.084566  494853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:41:28.103326  494853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:41:28.211233  494853 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:41:28.211330  494853 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:41:28.249053  494853 cri.go:89] found id: "7b62bd9d487096c579f1550339b68679be8332190765f60e06cd4937777a9df1"
	I1201 20:41:28.249076  494853 cri.go:89] found id: "29c40b113be21f8fe1bbe615bf111319d1777cc9025daf564682c1eefb3b445b"
	I1201 20:41:28.249080  494853 cri.go:89] found id: "e850c7755eb3428fe6fa7ba19c93fb7bc371967c19c4b82128cc91cb8053b5f3"
	I1201 20:41:28.249084  494853 cri.go:89] found id: "efdf78311fe62cfc0a35e43f8eeb729633a306dc0ea4ee568313518540399159"
	I1201 20:41:28.249088  494853 cri.go:89] found id: "232ae6c256a292c984f8cb48df8eceb3ee1873530d9e6f34c1a187c754908802"
	I1201 20:41:28.249091  494853 cri.go:89] found id: "7a49eee06f360dfeaf94beb2bbb4cdce7e5500414fdd2cee0ce12df2e5eb7f32"
	I1201 20:41:28.249095  494853 cri.go:89] found id: "dea3b2ad8e17be71f39c61f41026c7cb1b4623b5b887bff64c5b0486499999a1"
	I1201 20:41:28.249098  494853 cri.go:89] found id: "295353c277ab2fdf17a5bdf35885cd4aaf50e1c7a0310e8e9e47c938ee142acc"
	I1201 20:41:28.249101  494853 cri.go:89] found id: "b322f4a7417f96b30191db63c4f54268c9461124eb22cd29fa7aeee5aeec2c92"
	I1201 20:41:28.249109  494853 cri.go:89] found id: "dfa409f637400d697ead65609bdc54109d491cdce86d60e6c023d32ba59f02ae"
	I1201 20:41:28.249113  494853 cri.go:89] found id: "7e60a35a8eba6d85c1e35fe7520e0df66d2be5e95549b379c81bee82272e106c"
	I1201 20:41:28.249117  494853 cri.go:89] found id: "3f46bcefd8d83c33619ab577977393c12c9eb43945e7d3125f4e246f5b0455d5"
	I1201 20:41:28.249120  494853 cri.go:89] found id: "58cd25bffc816d350673df609f72e7f334b3ed0cfccb32cf1b2638a79781b10e"
	I1201 20:41:28.249123  494853 cri.go:89] found id: "361dc8194383806d837ada675e727c49f53ac9cfd9b315a3224ea1ce0ebfcc3b"
	I1201 20:41:28.249127  494853 cri.go:89] found id: "9f83ec5f5e5514d5a500d7b543761751c20c52d5b0c4da0872a31d0231b628fd"
	I1201 20:41:28.249134  494853 cri.go:89] found id: "2355b41e2da84e3db29da2f6728212647e392fda597ebd954072085ccc5b4440"
	I1201 20:41:28.249137  494853 cri.go:89] found id: "1837dcaf5caf8fbebc71252339be8e05fe293e1db73f148ce648a43a877e6c06"
	I1201 20:41:28.249142  494853 cri.go:89] found id: "95ac3b0ee00d6ddb757ec6c4e57282c44007d2ea906b924c19d96021bc597dd9"
	I1201 20:41:28.249146  494853 cri.go:89] found id: "53fd34a71ad2647a883f70ec1aceb708dc5a011083d943427fe324abe79d43ac"
	I1201 20:41:28.249150  494853 cri.go:89] found id: "913315b106bf848f2bc78aeee2dff59fb0d7a2768c8a5dc7d27460b0037c689d"
	I1201 20:41:28.249154  494853 cri.go:89] found id: "d708a60b3df7ced4763b714c1f1a36c6df9483c81552da97ea0386f1f248b3ef"
	I1201 20:41:28.249158  494853 cri.go:89] found id: "2608ffb63d77980a71676c95316c60a1bf74002a61cf3024ec1b056d5b0cf0be"
	I1201 20:41:28.249162  494853 cri.go:89] found id: "969d358cb0a5cd5ce66e56d51a58b46aef284ea9dc6eb5b45fbef1ed0b16310d"
	I1201 20:41:28.249170  494853 cri.go:89] found id: ""
	I1201 20:41:28.249219  494853 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:41:28.268079  494853 out.go:203] 
	W1201 20:41:28.270980  494853 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:41:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:41:28.271020  494853 out.go:285] * 
	W1201 20:41:28.277568  494853 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:41:28.281308  494853 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-947185 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.29s)

TestAddons/parallel/MetricsServer (5.41s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 7.565615ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-wwwt5" [32b3ea6f-e4c4-4e63-8992-e1371c406519] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005923034s
addons_test.go:463: (dbg) Run:  kubectl --context addons-947185 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-947185 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (292.888632ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1201 20:41:29.058330  494993 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:41:29.059218  494993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:29.059238  494993 out.go:374] Setting ErrFile to fd 2...
	I1201 20:41:29.059245  494993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:29.059525  494993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:41:29.059836  494993 mustload.go:66] Loading cluster: addons-947185
	I1201 20:41:29.060212  494993 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:29.060229  494993 addons.go:622] checking whether the cluster is paused
	I1201 20:41:29.060338  494993 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:29.060352  494993 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:41:29.060899  494993 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:41:29.085103  494993 ssh_runner.go:195] Run: systemctl --version
	I1201 20:41:29.085167  494993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:41:29.104348  494993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:41:29.214352  494993 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:41:29.214498  494993 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:41:29.262208  494993 cri.go:89] found id: "7b62bd9d487096c579f1550339b68679be8332190765f60e06cd4937777a9df1"
	I1201 20:41:29.262239  494993 cri.go:89] found id: "29c40b113be21f8fe1bbe615bf111319d1777cc9025daf564682c1eefb3b445b"
	I1201 20:41:29.262246  494993 cri.go:89] found id: "e850c7755eb3428fe6fa7ba19c93fb7bc371967c19c4b82128cc91cb8053b5f3"
	I1201 20:41:29.262250  494993 cri.go:89] found id: "efdf78311fe62cfc0a35e43f8eeb729633a306dc0ea4ee568313518540399159"
	I1201 20:41:29.262253  494993 cri.go:89] found id: "232ae6c256a292c984f8cb48df8eceb3ee1873530d9e6f34c1a187c754908802"
	I1201 20:41:29.262282  494993 cri.go:89] found id: "7a49eee06f360dfeaf94beb2bbb4cdce7e5500414fdd2cee0ce12df2e5eb7f32"
	I1201 20:41:29.262290  494993 cri.go:89] found id: "dea3b2ad8e17be71f39c61f41026c7cb1b4623b5b887bff64c5b0486499999a1"
	I1201 20:41:29.262294  494993 cri.go:89] found id: "295353c277ab2fdf17a5bdf35885cd4aaf50e1c7a0310e8e9e47c938ee142acc"
	I1201 20:41:29.262298  494993 cri.go:89] found id: "b322f4a7417f96b30191db63c4f54268c9461124eb22cd29fa7aeee5aeec2c92"
	I1201 20:41:29.262308  494993 cri.go:89] found id: "dfa409f637400d697ead65609bdc54109d491cdce86d60e6c023d32ba59f02ae"
	I1201 20:41:29.262315  494993 cri.go:89] found id: "7e60a35a8eba6d85c1e35fe7520e0df66d2be5e95549b379c81bee82272e106c"
	I1201 20:41:29.262319  494993 cri.go:89] found id: "3f46bcefd8d83c33619ab577977393c12c9eb43945e7d3125f4e246f5b0455d5"
	I1201 20:41:29.262322  494993 cri.go:89] found id: "58cd25bffc816d350673df609f72e7f334b3ed0cfccb32cf1b2638a79781b10e"
	I1201 20:41:29.262326  494993 cri.go:89] found id: "361dc8194383806d837ada675e727c49f53ac9cfd9b315a3224ea1ce0ebfcc3b"
	I1201 20:41:29.262329  494993 cri.go:89] found id: "9f83ec5f5e5514d5a500d7b543761751c20c52d5b0c4da0872a31d0231b628fd"
	I1201 20:41:29.262337  494993 cri.go:89] found id: "2355b41e2da84e3db29da2f6728212647e392fda597ebd954072085ccc5b4440"
	I1201 20:41:29.262344  494993 cri.go:89] found id: "1837dcaf5caf8fbebc71252339be8e05fe293e1db73f148ce648a43a877e6c06"
	I1201 20:41:29.262349  494993 cri.go:89] found id: "95ac3b0ee00d6ddb757ec6c4e57282c44007d2ea906b924c19d96021bc597dd9"
	I1201 20:41:29.262352  494993 cri.go:89] found id: "53fd34a71ad2647a883f70ec1aceb708dc5a011083d943427fe324abe79d43ac"
	I1201 20:41:29.262355  494993 cri.go:89] found id: "913315b106bf848f2bc78aeee2dff59fb0d7a2768c8a5dc7d27460b0037c689d"
	I1201 20:41:29.262361  494993 cri.go:89] found id: "d708a60b3df7ced4763b714c1f1a36c6df9483c81552da97ea0386f1f248b3ef"
	I1201 20:41:29.262364  494993 cri.go:89] found id: "2608ffb63d77980a71676c95316c60a1bf74002a61cf3024ec1b056d5b0cf0be"
	I1201 20:41:29.262366  494993 cri.go:89] found id: "969d358cb0a5cd5ce66e56d51a58b46aef284ea9dc6eb5b45fbef1ed0b16310d"
	I1201 20:41:29.262370  494993 cri.go:89] found id: ""
	I1201 20:41:29.262424  494993 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:41:29.282873  494993 out.go:203] 
	W1201 20:41:29.285969  494993 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:41:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:41:29.285993  494993 out.go:285] * 
	W1201 20:41:29.292744  494993 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:41:29.295714  494993 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-947185 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.41s)

TestAddons/parallel/CSI (30.51s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1201 20:40:50.003103  486002 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1201 20:40:50.009314  486002 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1201 20:40:50.009346  486002 kapi.go:107] duration metric: took 6.261035ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.274656ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-947185 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947185 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-947185 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [76418187-79ce-4df4-9e89-30316506525c] Pending
helpers_test.go:352: "task-pv-pod" [76418187-79ce-4df4-9e89-30316506525c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [76418187-79ce-4df4-9e89-30316506525c] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004366495s
addons_test.go:572: (dbg) Run:  kubectl --context addons-947185 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-947185 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-947185 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-947185 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-947185 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-947185 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-947185 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b] Pending
helpers_test.go:352: "task-pv-pod-restore" [8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003795353s
addons_test.go:614: (dbg) Run:  kubectl --context addons-947185 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-947185 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-947185 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-947185 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (294.502745ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1201 20:41:19.982239  494190 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:41:19.983188  494190 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:19.983236  494190 out.go:374] Setting ErrFile to fd 2...
	I1201 20:41:19.983261  494190 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:19.983579  494190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:41:19.983926  494190 mustload.go:66] Loading cluster: addons-947185
	I1201 20:41:19.984380  494190 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:19.984420  494190 addons.go:622] checking whether the cluster is paused
	I1201 20:41:19.984571  494190 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:19.984606  494190 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:41:19.985186  494190 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:41:20.003419  494190 ssh_runner.go:195] Run: systemctl --version
	I1201 20:41:20.003500  494190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:41:20.032957  494190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:41:20.138697  494190 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:41:20.138865  494190 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:41:20.181634  494190 cri.go:89] found id: "7b62bd9d487096c579f1550339b68679be8332190765f60e06cd4937777a9df1"
	I1201 20:41:20.181666  494190 cri.go:89] found id: "29c40b113be21f8fe1bbe615bf111319d1777cc9025daf564682c1eefb3b445b"
	I1201 20:41:20.181683  494190 cri.go:89] found id: "e850c7755eb3428fe6fa7ba19c93fb7bc371967c19c4b82128cc91cb8053b5f3"
	I1201 20:41:20.181688  494190 cri.go:89] found id: "efdf78311fe62cfc0a35e43f8eeb729633a306dc0ea4ee568313518540399159"
	I1201 20:41:20.181710  494190 cri.go:89] found id: "232ae6c256a292c984f8cb48df8eceb3ee1873530d9e6f34c1a187c754908802"
	I1201 20:41:20.181732  494190 cri.go:89] found id: "7a49eee06f360dfeaf94beb2bbb4cdce7e5500414fdd2cee0ce12df2e5eb7f32"
	I1201 20:41:20.181736  494190 cri.go:89] found id: "dea3b2ad8e17be71f39c61f41026c7cb1b4623b5b887bff64c5b0486499999a1"
	I1201 20:41:20.181739  494190 cri.go:89] found id: "295353c277ab2fdf17a5bdf35885cd4aaf50e1c7a0310e8e9e47c938ee142acc"
	I1201 20:41:20.181743  494190 cri.go:89] found id: "b322f4a7417f96b30191db63c4f54268c9461124eb22cd29fa7aeee5aeec2c92"
	I1201 20:41:20.181750  494190 cri.go:89] found id: "dfa409f637400d697ead65609bdc54109d491cdce86d60e6c023d32ba59f02ae"
	I1201 20:41:20.181759  494190 cri.go:89] found id: "7e60a35a8eba6d85c1e35fe7520e0df66d2be5e95549b379c81bee82272e106c"
	I1201 20:41:20.181764  494190 cri.go:89] found id: "3f46bcefd8d83c33619ab577977393c12c9eb43945e7d3125f4e246f5b0455d5"
	I1201 20:41:20.181768  494190 cri.go:89] found id: "58cd25bffc816d350673df609f72e7f334b3ed0cfccb32cf1b2638a79781b10e"
	I1201 20:41:20.181772  494190 cri.go:89] found id: "361dc8194383806d837ada675e727c49f53ac9cfd9b315a3224ea1ce0ebfcc3b"
	I1201 20:41:20.181800  494190 cri.go:89] found id: "9f83ec5f5e5514d5a500d7b543761751c20c52d5b0c4da0872a31d0231b628fd"
	I1201 20:41:20.181807  494190 cri.go:89] found id: "2355b41e2da84e3db29da2f6728212647e392fda597ebd954072085ccc5b4440"
	I1201 20:41:20.181829  494190 cri.go:89] found id: "1837dcaf5caf8fbebc71252339be8e05fe293e1db73f148ce648a43a877e6c06"
	I1201 20:41:20.181833  494190 cri.go:89] found id: "95ac3b0ee00d6ddb757ec6c4e57282c44007d2ea906b924c19d96021bc597dd9"
	I1201 20:41:20.181837  494190 cri.go:89] found id: "53fd34a71ad2647a883f70ec1aceb708dc5a011083d943427fe324abe79d43ac"
	I1201 20:41:20.181841  494190 cri.go:89] found id: "913315b106bf848f2bc78aeee2dff59fb0d7a2768c8a5dc7d27460b0037c689d"
	I1201 20:41:20.181852  494190 cri.go:89] found id: "d708a60b3df7ced4763b714c1f1a36c6df9483c81552da97ea0386f1f248b3ef"
	I1201 20:41:20.181875  494190 cri.go:89] found id: "2608ffb63d77980a71676c95316c60a1bf74002a61cf3024ec1b056d5b0cf0be"
	I1201 20:41:20.181887  494190 cri.go:89] found id: "969d358cb0a5cd5ce66e56d51a58b46aef284ea9dc6eb5b45fbef1ed0b16310d"
	I1201 20:41:20.181891  494190 cri.go:89] found id: ""
	I1201 20:41:20.181962  494190 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:41:20.200603  494190 out.go:203] 
	W1201 20:41:20.203641  494190 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:41:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:41:20.203676  494190 out.go:285] * 
	W1201 20:41:20.210483  494190 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:41:20.213450  494190 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-947185 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-947185 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (289.934492ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1201 20:41:20.283663  494240 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:41:20.284551  494240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:20.284606  494240 out.go:374] Setting ErrFile to fd 2...
	I1201 20:41:20.284628  494240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:20.284927  494240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:41:20.285274  494240 mustload.go:66] Loading cluster: addons-947185
	I1201 20:41:20.285763  494240 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:20.285808  494240 addons.go:622] checking whether the cluster is paused
	I1201 20:41:20.285969  494240 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:20.286002  494240 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:41:20.286606  494240 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:41:20.309194  494240 ssh_runner.go:195] Run: systemctl --version
	I1201 20:41:20.309252  494240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:41:20.330203  494240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:41:20.438281  494240 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:41:20.438375  494240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:41:20.471943  494240 cri.go:89] found id: "7b62bd9d487096c579f1550339b68679be8332190765f60e06cd4937777a9df1"
	I1201 20:41:20.471972  494240 cri.go:89] found id: "29c40b113be21f8fe1bbe615bf111319d1777cc9025daf564682c1eefb3b445b"
	I1201 20:41:20.471978  494240 cri.go:89] found id: "e850c7755eb3428fe6fa7ba19c93fb7bc371967c19c4b82128cc91cb8053b5f3"
	I1201 20:41:20.471982  494240 cri.go:89] found id: "efdf78311fe62cfc0a35e43f8eeb729633a306dc0ea4ee568313518540399159"
	I1201 20:41:20.471986  494240 cri.go:89] found id: "232ae6c256a292c984f8cb48df8eceb3ee1873530d9e6f34c1a187c754908802"
	I1201 20:41:20.471989  494240 cri.go:89] found id: "7a49eee06f360dfeaf94beb2bbb4cdce7e5500414fdd2cee0ce12df2e5eb7f32"
	I1201 20:41:20.471993  494240 cri.go:89] found id: "dea3b2ad8e17be71f39c61f41026c7cb1b4623b5b887bff64c5b0486499999a1"
	I1201 20:41:20.471997  494240 cri.go:89] found id: "295353c277ab2fdf17a5bdf35885cd4aaf50e1c7a0310e8e9e47c938ee142acc"
	I1201 20:41:20.472000  494240 cri.go:89] found id: "b322f4a7417f96b30191db63c4f54268c9461124eb22cd29fa7aeee5aeec2c92"
	I1201 20:41:20.472006  494240 cri.go:89] found id: "dfa409f637400d697ead65609bdc54109d491cdce86d60e6c023d32ba59f02ae"
	I1201 20:41:20.472010  494240 cri.go:89] found id: "7e60a35a8eba6d85c1e35fe7520e0df66d2be5e95549b379c81bee82272e106c"
	I1201 20:41:20.472013  494240 cri.go:89] found id: "3f46bcefd8d83c33619ab577977393c12c9eb43945e7d3125f4e246f5b0455d5"
	I1201 20:41:20.472016  494240 cri.go:89] found id: "58cd25bffc816d350673df609f72e7f334b3ed0cfccb32cf1b2638a79781b10e"
	I1201 20:41:20.472019  494240 cri.go:89] found id: "361dc8194383806d837ada675e727c49f53ac9cfd9b315a3224ea1ce0ebfcc3b"
	I1201 20:41:20.472022  494240 cri.go:89] found id: "9f83ec5f5e5514d5a500d7b543761751c20c52d5b0c4da0872a31d0231b628fd"
	I1201 20:41:20.472031  494240 cri.go:89] found id: "2355b41e2da84e3db29da2f6728212647e392fda597ebd954072085ccc5b4440"
	I1201 20:41:20.472035  494240 cri.go:89] found id: "1837dcaf5caf8fbebc71252339be8e05fe293e1db73f148ce648a43a877e6c06"
	I1201 20:41:20.472039  494240 cri.go:89] found id: "95ac3b0ee00d6ddb757ec6c4e57282c44007d2ea906b924c19d96021bc597dd9"
	I1201 20:41:20.472046  494240 cri.go:89] found id: "53fd34a71ad2647a883f70ec1aceb708dc5a011083d943427fe324abe79d43ac"
	I1201 20:41:20.472049  494240 cri.go:89] found id: "913315b106bf848f2bc78aeee2dff59fb0d7a2768c8a5dc7d27460b0037c689d"
	I1201 20:41:20.472054  494240 cri.go:89] found id: "d708a60b3df7ced4763b714c1f1a36c6df9483c81552da97ea0386f1f248b3ef"
	I1201 20:41:20.472057  494240 cri.go:89] found id: "2608ffb63d77980a71676c95316c60a1bf74002a61cf3024ec1b056d5b0cf0be"
	I1201 20:41:20.472060  494240 cri.go:89] found id: "969d358cb0a5cd5ce66e56d51a58b46aef284ea9dc6eb5b45fbef1ed0b16310d"
	I1201 20:41:20.472063  494240 cri.go:89] found id: ""
	I1201 20:41:20.472119  494240 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:41:20.491389  494240 out.go:203] 
	W1201 20:41:20.494388  494240 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:41:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:41:20.494415  494240 out.go:285] * 
	W1201 20:41:20.501174  494240 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:41:20.504164  494240 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-947185 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (30.51s)
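The CSI failure above (and the Headlamp failure below) share one root cause: minikube's addon code verifies the cluster is not paused by running `sudo runc list -f json` over SSH, and that command exits 1 with "open /run/runc: no such file or directory" because runc's state directory was never created on this CRI-O node. A minimal sketch of a more tolerant version of that check, for illustration only: the `RUNC_ROOT` variable and the empty-list fallback are assumptions, not minikube's actual behavior, and `sudo` is dropped for brevity.

```shell
#!/bin/sh
# Hedged sketch: treat a missing runc state directory as "no containers"
# instead of a hard failure. RUNC_ROOT is illustrative, not a minikube flag.
RUNC_ROOT="${RUNC_ROOT:-/run/runc}"
if [ -d "$RUNC_ROOT" ] && command -v runc >/dev/null 2>&1; then
  # State dir exists: list containers, as the log's paused-state check does.
  runc --root "$RUNC_ROOT" list -f json
else
  # The "open /run/runc: no such file or directory" case from the log:
  # report an empty container list rather than failing the addon command.
  echo "[]"
fi
```

With a guard like this, `addons disable csi-hostpath-driver` would see an empty paused-container list on a node where runc has never run, instead of exiting with MK_ADDON_DISABLE_PAUSED.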

TestAddons/parallel/Headlamp (3.39s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-947185 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-947185 --alsologtostderr -v=1: exit status 11 (272.371951ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1201 20:41:20.564253  494284 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:41:20.565138  494284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:20.565185  494284 out.go:374] Setting ErrFile to fd 2...
	I1201 20:41:20.565207  494284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:20.565595  494284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:41:20.566466  494284 mustload.go:66] Loading cluster: addons-947185
	I1201 20:41:20.567186  494284 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:20.567213  494284 addons.go:622] checking whether the cluster is paused
	I1201 20:41:20.567393  494284 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:20.567429  494284 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:41:20.569198  494284 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:41:20.588767  494284 ssh_runner.go:195] Run: systemctl --version
	I1201 20:41:20.588829  494284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:41:20.608035  494284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:41:20.713917  494284 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:41:20.714000  494284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:41:20.748498  494284 cri.go:89] found id: "7b62bd9d487096c579f1550339b68679be8332190765f60e06cd4937777a9df1"
	I1201 20:41:20.748529  494284 cri.go:89] found id: "29c40b113be21f8fe1bbe615bf111319d1777cc9025daf564682c1eefb3b445b"
	I1201 20:41:20.748535  494284 cri.go:89] found id: "e850c7755eb3428fe6fa7ba19c93fb7bc371967c19c4b82128cc91cb8053b5f3"
	I1201 20:41:20.748556  494284 cri.go:89] found id: "efdf78311fe62cfc0a35e43f8eeb729633a306dc0ea4ee568313518540399159"
	I1201 20:41:20.748565  494284 cri.go:89] found id: "232ae6c256a292c984f8cb48df8eceb3ee1873530d9e6f34c1a187c754908802"
	I1201 20:41:20.748570  494284 cri.go:89] found id: "7a49eee06f360dfeaf94beb2bbb4cdce7e5500414fdd2cee0ce12df2e5eb7f32"
	I1201 20:41:20.748582  494284 cri.go:89] found id: "dea3b2ad8e17be71f39c61f41026c7cb1b4623b5b887bff64c5b0486499999a1"
	I1201 20:41:20.748585  494284 cri.go:89] found id: "295353c277ab2fdf17a5bdf35885cd4aaf50e1c7a0310e8e9e47c938ee142acc"
	I1201 20:41:20.748588  494284 cri.go:89] found id: "b322f4a7417f96b30191db63c4f54268c9461124eb22cd29fa7aeee5aeec2c92"
	I1201 20:41:20.748603  494284 cri.go:89] found id: "dfa409f637400d697ead65609bdc54109d491cdce86d60e6c023d32ba59f02ae"
	I1201 20:41:20.748611  494284 cri.go:89] found id: "7e60a35a8eba6d85c1e35fe7520e0df66d2be5e95549b379c81bee82272e106c"
	I1201 20:41:20.748614  494284 cri.go:89] found id: "3f46bcefd8d83c33619ab577977393c12c9eb43945e7d3125f4e246f5b0455d5"
	I1201 20:41:20.748617  494284 cri.go:89] found id: "58cd25bffc816d350673df609f72e7f334b3ed0cfccb32cf1b2638a79781b10e"
	I1201 20:41:20.748620  494284 cri.go:89] found id: "361dc8194383806d837ada675e727c49f53ac9cfd9b315a3224ea1ce0ebfcc3b"
	I1201 20:41:20.748626  494284 cri.go:89] found id: "9f83ec5f5e5514d5a500d7b543761751c20c52d5b0c4da0872a31d0231b628fd"
	I1201 20:41:20.748631  494284 cri.go:89] found id: "2355b41e2da84e3db29da2f6728212647e392fda597ebd954072085ccc5b4440"
	I1201 20:41:20.748637  494284 cri.go:89] found id: "1837dcaf5caf8fbebc71252339be8e05fe293e1db73f148ce648a43a877e6c06"
	I1201 20:41:20.748642  494284 cri.go:89] found id: "95ac3b0ee00d6ddb757ec6c4e57282c44007d2ea906b924c19d96021bc597dd9"
	I1201 20:41:20.748645  494284 cri.go:89] found id: "53fd34a71ad2647a883f70ec1aceb708dc5a011083d943427fe324abe79d43ac"
	I1201 20:41:20.748648  494284 cri.go:89] found id: "913315b106bf848f2bc78aeee2dff59fb0d7a2768c8a5dc7d27460b0037c689d"
	I1201 20:41:20.748653  494284 cri.go:89] found id: "d708a60b3df7ced4763b714c1f1a36c6df9483c81552da97ea0386f1f248b3ef"
	I1201 20:41:20.748657  494284 cri.go:89] found id: "2608ffb63d77980a71676c95316c60a1bf74002a61cf3024ec1b056d5b0cf0be"
	I1201 20:41:20.748659  494284 cri.go:89] found id: "969d358cb0a5cd5ce66e56d51a58b46aef284ea9dc6eb5b45fbef1ed0b16310d"
	I1201 20:41:20.748662  494284 cri.go:89] found id: ""
	I1201 20:41:20.748722  494284 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:41:20.764517  494284 out.go:203] 
	W1201 20:41:20.767319  494284 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:41:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:41:20.767346  494284 out.go:285] * 
	W1201 20:41:20.773952  494284 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:41:20.776882  494284 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-947185 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-947185
helpers_test.go:243: (dbg) docker inspect addons-947185:

-- stdout --
	[
	    {
	        "Id": "1e76f49106608dd7ce6e43e1d3af9a19c21e25311ae9d3cf51c18fc94ebdecb9",
	        "Created": "2025-12-01T20:38:06.56379414Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 487410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:38:06.632612263Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/1e76f49106608dd7ce6e43e1d3af9a19c21e25311ae9d3cf51c18fc94ebdecb9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1e76f49106608dd7ce6e43e1d3af9a19c21e25311ae9d3cf51c18fc94ebdecb9/hostname",
	        "HostsPath": "/var/lib/docker/containers/1e76f49106608dd7ce6e43e1d3af9a19c21e25311ae9d3cf51c18fc94ebdecb9/hosts",
	        "LogPath": "/var/lib/docker/containers/1e76f49106608dd7ce6e43e1d3af9a19c21e25311ae9d3cf51c18fc94ebdecb9/1e76f49106608dd7ce6e43e1d3af9a19c21e25311ae9d3cf51c18fc94ebdecb9-json.log",
	        "Name": "/addons-947185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-947185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-947185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1e76f49106608dd7ce6e43e1d3af9a19c21e25311ae9d3cf51c18fc94ebdecb9",
	                "LowerDir": "/var/lib/docker/overlay2/3dc9a77c3516cdaa521570b418a8a7608cf48ac01accb0d6dff10e3cf7bdc79a-init/diff:/var/lib/docker/overlay2/f0ba49b44048d740697b37803f992c2f7a99e21ce77995ff128ceffc01329aa1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3dc9a77c3516cdaa521570b418a8a7608cf48ac01accb0d6dff10e3cf7bdc79a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3dc9a77c3516cdaa521570b418a8a7608cf48ac01accb0d6dff10e3cf7bdc79a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3dc9a77c3516cdaa521570b418a8a7608cf48ac01accb0d6dff10e3cf7bdc79a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-947185",
	                "Source": "/var/lib/docker/volumes/addons-947185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-947185",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-947185",
	                "name.minikube.sigs.k8s.io": "addons-947185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "129e0dff262722110da0027a2bfe8f668c0ba8048f6336541d3e5766568353a5",
	            "SandboxKey": "/var/run/docker/netns/129e0dff2627",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-947185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:89:88:7d:b0:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dfb6f6fd26bfa307a0d061351931e04913ad57ce59ce0f7157642befa78f7126",
	                    "EndpointID": "3db396a9fc8c3d1334e7d458d576fdcc66e89939e3a97c9f93373fc684af7947",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-947185",
	                        "1e76f4910660"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-947185 -n addons-947185
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-947185 logs -n 25: (1.699977487s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-000800                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-000800   │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │ 01 Dec 25 20:37 UTC │
	│ start   │ -o=json --download-only -p download-only-193191 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-193191   │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │ 01 Dec 25 20:37 UTC │
	│ delete  │ -p download-only-193191                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-193191   │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │ 01 Dec 25 20:37 UTC │
	│ delete  │ -p download-only-111439                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-111439   │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │ 01 Dec 25 20:37 UTC │
	│ delete  │ -p download-only-000800                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-000800   │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │ 01 Dec 25 20:37 UTC │
	│ delete  │ -p download-only-193191                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-193191   │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │ 01 Dec 25 20:37 UTC │
	│ start   │ --download-only -p download-docker-074980 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-074980 │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │                     │
	│ delete  │ -p download-docker-074980                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-074980 │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │ 01 Dec 25 20:37 UTC │
	│ start   │ --download-only -p binary-mirror-986055 --alsologtostderr --binary-mirror http://127.0.0.1:41593 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-986055   │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │                     │
	│ delete  │ -p binary-mirror-986055                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-986055   │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │ 01 Dec 25 20:37 UTC │
	│ addons  │ enable dashboard -p addons-947185                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │                     │
	│ addons  │ disable dashboard -p addons-947185                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │                     │
	│ start   │ -p addons-947185 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │ 01 Dec 25 20:40 UTC │
	│ addons  │ addons-947185 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:40 UTC │                     │
	│ addons  │ addons-947185 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:40 UTC │                     │
	│ addons  │ addons-947185 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:40 UTC │                     │
	│ ip      │ addons-947185 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │ 01 Dec 25 20:41 UTC │
	│ addons  │ addons-947185 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │                     │
	│ addons  │ addons-947185 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │                     │
	│ ssh     │ addons-947185 ssh cat /opt/local-path-provisioner/pvc-8ec1522d-0dd4-4a4c-a6f7-cb8725038640_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │ 01 Dec 25 20:41 UTC │
	│ addons  │ addons-947185 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │                     │
	│ addons  │ addons-947185 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │                     │
	│ addons  │ addons-947185 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │                     │
	│ addons  │ enable headlamp -p addons-947185 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-947185          │ jenkins │ v1.37.0 │ 01 Dec 25 20:41 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:37:59
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:37:59.032536  487012 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:37:59.032739  487012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:37:59.032773  487012 out.go:374] Setting ErrFile to fd 2...
	I1201 20:37:59.032796  487012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:37:59.033098  487012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:37:59.033637  487012 out.go:368] Setting JSON to false
	I1201 20:37:59.034537  487012 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8428,"bootTime":1764613051,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 20:37:59.034646  487012 start.go:143] virtualization:  
	I1201 20:37:59.038483  487012 out.go:179] * [addons-947185] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 20:37:59.041855  487012 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:37:59.041911  487012 notify.go:221] Checking for updates...
	I1201 20:37:59.048094  487012 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:37:59.051164  487012 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 20:37:59.054238  487012 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 20:37:59.057307  487012 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 20:37:59.060284  487012 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:37:59.063596  487012 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:37:59.088630  487012 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 20:37:59.088777  487012 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:37:59.164563  487012 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-01 20:37:59.155025879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 20:37:59.164682  487012 docker.go:319] overlay module found
	I1201 20:37:59.168009  487012 out.go:179] * Using the docker driver based on user configuration
	I1201 20:37:59.171038  487012 start.go:309] selected driver: docker
	I1201 20:37:59.171064  487012 start.go:927] validating driver "docker" against <nil>
	I1201 20:37:59.171079  487012 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:37:59.171998  487012 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:37:59.229266  487012 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-01 20:37:59.219876105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 20:37:59.229440  487012 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1201 20:37:59.229691  487012 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:37:59.232655  487012 out.go:179] * Using Docker driver with root privileges
	I1201 20:37:59.235606  487012 cni.go:84] Creating CNI manager for ""
	I1201 20:37:59.235690  487012 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:37:59.235706  487012 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1201 20:37:59.235782  487012 start.go:353] cluster config:
	{Name:addons-947185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-947185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:37:59.238989  487012 out.go:179] * Starting "addons-947185" primary control-plane node in "addons-947185" cluster
	I1201 20:37:59.241778  487012 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:37:59.244825  487012 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:37:59.247583  487012 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:37:59.247635  487012 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1201 20:37:59.247646  487012 cache.go:65] Caching tarball of preloaded images
	I1201 20:37:59.247751  487012 preload.go:238] Found /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1201 20:37:59.247766  487012 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 20:37:59.248127  487012 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/config.json ...
	I1201 20:37:59.248152  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/config.json: {Name:mk58b9e23075d2dc4424a61d1dac09e84405f00d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:37:59.248324  487012 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 20:37:59.269868  487012 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:37:59.269889  487012 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1201 20:37:59.269911  487012 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:37:59.269945  487012 start.go:360] acquireMachinesLock for addons-947185: {Name:mkc87eceafa2be40884bb90866de997784cee8a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:37:59.270055  487012 start.go:364] duration metric: took 94.866µs to acquireMachinesLock for "addons-947185"
	I1201 20:37:59.270082  487012 start.go:93] Provisioning new machine with config: &{Name:addons-947185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-947185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:37:59.270152  487012 start.go:125] createHost starting for "" (driver="docker")
	I1201 20:37:59.273672  487012 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1201 20:37:59.274019  487012 start.go:159] libmachine.API.Create for "addons-947185" (driver="docker")
	I1201 20:37:59.274069  487012 client.go:173] LocalClient.Create starting
	I1201 20:37:59.274213  487012 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem
	I1201 20:37:59.795077  487012 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem
	I1201 20:38:00.020385  487012 cli_runner.go:164] Run: docker network inspect addons-947185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1201 20:38:00.056043  487012 cli_runner.go:211] docker network inspect addons-947185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1201 20:38:00.056159  487012 network_create.go:284] running [docker network inspect addons-947185] to gather additional debugging logs...
	I1201 20:38:00.056180  487012 cli_runner.go:164] Run: docker network inspect addons-947185
	W1201 20:38:00.079950  487012 cli_runner.go:211] docker network inspect addons-947185 returned with exit code 1
	I1201 20:38:00.079983  487012 network_create.go:287] error running [docker network inspect addons-947185]: docker network inspect addons-947185: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-947185 not found
	I1201 20:38:00.080000  487012 network_create.go:289] output of [docker network inspect addons-947185]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-947185 not found
	
	** /stderr **
	I1201 20:38:00.080111  487012 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:38:00.122250  487012 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001af0060}
	I1201 20:38:00.122304  487012 network_create.go:124] attempt to create docker network addons-947185 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1201 20:38:00.122370  487012 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-947185 addons-947185
	I1201 20:38:00.267581  487012 network_create.go:108] docker network addons-947185 192.168.49.0/24 created
	I1201 20:38:00.267623  487012 kic.go:121] calculated static IP "192.168.49.2" for the "addons-947185" container
	I1201 20:38:00.267751  487012 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1201 20:38:00.321128  487012 cli_runner.go:164] Run: docker volume create addons-947185 --label name.minikube.sigs.k8s.io=addons-947185 --label created_by.minikube.sigs.k8s.io=true
	I1201 20:38:00.355468  487012 oci.go:103] Successfully created a docker volume addons-947185
	I1201 20:38:00.355571  487012 cli_runner.go:164] Run: docker run --rm --name addons-947185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-947185 --entrypoint /usr/bin/test -v addons-947185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1201 20:38:02.356265  487012 cli_runner.go:217] Completed: docker run --rm --name addons-947185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-947185 --entrypoint /usr/bin/test -v addons-947185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (2.000652563s)
	I1201 20:38:02.356301  487012 oci.go:107] Successfully prepared a docker volume addons-947185
	I1201 20:38:02.356351  487012 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:38:02.356369  487012 kic.go:194] Starting extracting preloaded images to volume ...
	I1201 20:38:02.356452  487012 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-947185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1201 20:38:06.465652  487012 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-947185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.109156789s)
	I1201 20:38:06.465706  487012 kic.go:203] duration metric: took 4.109333294s to extract preloaded images to volume ...
	W1201 20:38:06.465864  487012 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1201 20:38:06.466006  487012 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1201 20:38:06.544261  487012 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-947185 --name addons-947185 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-947185 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-947185 --network addons-947185 --ip 192.168.49.2 --volume addons-947185:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1201 20:38:06.866643  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Running}}
	I1201 20:38:06.891394  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:06.919940  487012 cli_runner.go:164] Run: docker exec addons-947185 stat /var/lib/dpkg/alternatives/iptables
	I1201 20:38:06.982784  487012 oci.go:144] the created container "addons-947185" has a running status.
	I1201 20:38:06.982826  487012 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa...
	I1201 20:38:07.296985  487012 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1201 20:38:07.326206  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:07.351731  487012 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1201 20:38:07.351759  487012 kic_runner.go:114] Args: [docker exec --privileged addons-947185 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1201 20:38:07.420675  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:07.440392  487012 machine.go:94] provisionDockerMachine start ...
	I1201 20:38:07.440508  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:07.460304  487012 main.go:143] libmachine: Using SSH client type: native
	I1201 20:38:07.460655  487012 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33165 <nil> <nil>}
	I1201 20:38:07.460673  487012 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:38:07.461347  487012 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1201 20:38:10.615211  487012 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-947185
	
	I1201 20:38:10.615240  487012 ubuntu.go:182] provisioning hostname "addons-947185"
	I1201 20:38:10.615315  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:10.635497  487012 main.go:143] libmachine: Using SSH client type: native
	I1201 20:38:10.635869  487012 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33165 <nil> <nil>}
	I1201 20:38:10.635891  487012 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-947185 && echo "addons-947185" | sudo tee /etc/hostname
	I1201 20:38:10.797148  487012 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-947185
	
	I1201 20:38:10.797294  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:10.815791  487012 main.go:143] libmachine: Using SSH client type: native
	I1201 20:38:10.816114  487012 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33165 <nil> <nil>}
	I1201 20:38:10.816135  487012 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-947185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-947185/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-947185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:38:10.967886  487012 main.go:143] libmachine: SSH cmd err, output: <nil>: 
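The /etc/hosts snippet that just ran is a replace-or-append: rewrite an existing 127.0.1.1 entry in place if one exists, otherwise append one. A standalone sketch of the same logic against a scratch file (the temp path and initial contents are illustrative stand-ins, not read from the node; the grep patterns are simplified from the log's `-xq` forms):

```shell
#!/usr/bin/env bash
# Sketch of the provisioner's 127.0.1.1 replace-or-append, on a temp file.
set -eu
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"

name=addons-947185
if ! grep -q "\s$name$" "$hosts"; then
  if grep -q '^127\.0\.1\.1\s' "$hosts"; then
    # An entry exists: rewrite it to point at the new hostname.
    sed -i "s/^127.0.1.1\s.*/127.0.1.1 $name/g" "$hosts"
  else
    # No entry yet: append one.
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
cat "$hosts"
```

Running it twice leaves the file unchanged the second time, which is why the provisioner can safely re-run this on every start.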
	I1201 20:38:10.967916  487012 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-482752/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-482752/.minikube}
	I1201 20:38:10.967952  487012 ubuntu.go:190] setting up certificates
	I1201 20:38:10.967964  487012 provision.go:84] configureAuth start
	I1201 20:38:10.968048  487012 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-947185
	I1201 20:38:10.987490  487012 provision.go:143] copyHostCerts
	I1201 20:38:10.987583  487012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem (1082 bytes)
	I1201 20:38:10.987721  487012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem (1123 bytes)
	I1201 20:38:10.987795  487012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem (1675 bytes)
	I1201 20:38:10.987865  487012 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem org=jenkins.addons-947185 san=[127.0.0.1 192.168.49.2 addons-947185 localhost minikube]
	I1201 20:38:11.349453  487012 provision.go:177] copyRemoteCerts
	I1201 20:38:11.349792  487012 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:38:11.349842  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:11.368649  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:11.476349  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:38:11.498734  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1201 20:38:11.518709  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1201 20:38:11.538353  487012 provision.go:87] duration metric: took 570.360369ms to configureAuth
	I1201 20:38:11.538384  487012 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:38:11.538586  487012 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:38:11.538706  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:11.557923  487012 main.go:143] libmachine: Using SSH client type: native
	I1201 20:38:11.558272  487012 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33165 <nil> <nil>}
	I1201 20:38:11.558296  487012 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:38:12.028990  487012 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:38:12.029020  487012 machine.go:97] duration metric: took 4.588598345s to provisionDockerMachine
	I1201 20:38:12.029034  487012 client.go:176] duration metric: took 12.754955953s to LocalClient.Create
	I1201 20:38:12.029049  487012 start.go:167] duration metric: took 12.755033547s to libmachine.API.Create "addons-947185"
	I1201 20:38:12.029057  487012 start.go:293] postStartSetup for "addons-947185" (driver="docker")
	I1201 20:38:12.029070  487012 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:38:12.029151  487012 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:38:12.029202  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:12.048622  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:12.155635  487012 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:38:12.159243  487012 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:38:12.159272  487012 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:38:12.159286  487012 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/addons for local assets ...
	I1201 20:38:12.159361  487012 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/files for local assets ...
	I1201 20:38:12.159396  487012 start.go:296] duration metric: took 130.331161ms for postStartSetup
	I1201 20:38:12.159723  487012 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-947185
	I1201 20:38:12.177533  487012 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/config.json ...
	I1201 20:38:12.177834  487012 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:38:12.177877  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:12.196999  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:12.300395  487012 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 20:38:12.305559  487012 start.go:128] duration metric: took 13.035391847s to createHost
	I1201 20:38:12.305583  487012 start.go:83] releasing machines lock for "addons-947185", held for 13.035518918s
	I1201 20:38:12.305669  487012 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-947185
	I1201 20:38:12.324022  487012 ssh_runner.go:195] Run: cat /version.json
	I1201 20:38:12.324052  487012 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:38:12.324086  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:12.324122  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:12.344755  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:12.346188  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:12.550810  487012 ssh_runner.go:195] Run: systemctl --version
	I1201 20:38:12.557835  487012 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:38:12.604750  487012 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:38:12.609711  487012 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:38:12.609801  487012 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:38:12.640367  487012 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
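The `find ... -exec mv` step above disables competing bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix, while leaving everything else (loopback included) in place. The same rename pattern replayed on a scratch directory (file names mirror the ones the log reports disabling; the temp dir stands in for /etc/cni/net.d):

```shell
#!/usr/bin/env bash
# Sketch: disable bridge/podman CNI configs by renaming, as the log does.
set -eu
netd=$(mktemp -d)
touch "$netd/87-podman-bridge.conflist" \
      "$netd/10-crio-bridge.conflist.disabled" \
      "$netd/200-loopback.conf"

# Match bridge/podman configs not already renamed, and move each aside.
find "$netd" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$netd"
```

Renaming rather than deleting keeps the step reversible and makes the `-not -name '*.mk_disabled'` guard idempotent across restarts.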
	I1201 20:38:12.640435  487012 start.go:496] detecting cgroup driver to use...
	I1201 20:38:12.640483  487012 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1201 20:38:12.640545  487012 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:38:12.658833  487012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:38:12.673086  487012 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:38:12.673170  487012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:38:12.692207  487012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:38:12.711886  487012 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:38:12.837966  487012 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:38:13.001471  487012 docker.go:234] disabling docker service ...
	I1201 20:38:13.001643  487012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:38:13.028303  487012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:38:13.044294  487012 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:38:13.163711  487012 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:38:13.285594  487012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:38:13.299482  487012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
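The crictl config written above is a one-key YAML file pointing crictl at cri-o's socket. Reproduced into a temp file (the real target on the node is /etc/crictl.yaml):

```shell
#!/usr/bin/env bash
# Sketch: the crictl.yaml content minikube installs, written to a temp file.
set -eu
crictl_yaml=$(mktemp)
printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' > "$crictl_yaml"
cat "$crictl_yaml"
```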
	I1201 20:38:13.313674  487012 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:38:13.313741  487012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:38:13.322544  487012 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 20:38:13.322617  487012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:38:13.331623  487012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:38:13.341401  487012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:38:13.350386  487012 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:38:13.359329  487012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:38:13.368209  487012 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:38:13.382426  487012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
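The run of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: swap the pause image, force the cgroupfs driver, reset `conmon_cgroup`, and ensure a `default_sysctls` list that allows unprivileged low ports. Replayed here against a scratch copy (the initial file contents are assumed for illustration, not read from the kicbase image; the sed expressions are the ones from the log):

```shell
#!/usr/bin/env bash
# Sketch: the 02-crio.conf rewrites above, against a throwaway copy.
set -eu
conf=$(mktemp)
cat > "$conf" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Use the preloaded pause image and the host's cgroupfs driver.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
# Reset conmon_cgroup to "pod", re-inserted right after cgroup_manager.
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
# Ensure a default_sysctls list exists, then allow unprivileged low ports.
sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$conf"
grep -q '^ *default_sysctls' "$conf" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"

cat "$conf"
```

Each edit is written to be safely re-runnable, which is why the log follows them with a single `systemctl daemon-reload` and `systemctl restart crio` rather than any conditional logic.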
	I1201 20:38:13.391364  487012 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:38:13.399240  487012 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:38:13.406674  487012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:38:13.523096  487012 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:38:13.715610  487012 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:38:13.715758  487012 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:38:13.720097  487012 start.go:564] Will wait 60s for crictl version
	I1201 20:38:13.720212  487012 ssh_runner.go:195] Run: which crictl
	I1201 20:38:13.724628  487012 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:38:13.762132  487012 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:38:13.762248  487012 ssh_runner.go:195] Run: crio --version
	I1201 20:38:13.793729  487012 ssh_runner.go:195] Run: crio --version
	I1201 20:38:13.828489  487012 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1201 20:38:13.831439  487012 cli_runner.go:164] Run: docker network inspect addons-947185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:38:13.849480  487012 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1201 20:38:13.853899  487012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:38:13.865445  487012 kubeadm.go:884] updating cluster {Name:addons-947185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-947185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:38:13.865586  487012 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:38:13.865661  487012 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:38:13.908880  487012 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:38:13.908912  487012 crio.go:433] Images already preloaded, skipping extraction
	I1201 20:38:13.908978  487012 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:38:13.938524  487012 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:38:13.938557  487012 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:38:13.938568  487012 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1201 20:38:13.938686  487012 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-947185 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-947185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:38:13.938789  487012 ssh_runner.go:195] Run: crio config
	I1201 20:38:14.005872  487012 cni.go:84] Creating CNI manager for ""
	I1201 20:38:14.005904  487012 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:38:14.005926  487012 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:38:14.005963  487012 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-947185 NodeName:addons-947185 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:38:14.006121  487012 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-947185"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:38:14.006216  487012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 20:38:14.019189  487012 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:38:14.019268  487012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:38:14.028231  487012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1201 20:38:14.043240  487012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:38:14.057553  487012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1201 20:38:14.071510  487012 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:38:14.075477  487012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
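The control-plane.minikube.internal update uses a different idempotency trick than the hostname step: filter out any stale entry with `grep -v`, append the fresh mapping, and copy the result back over the file. The same pattern on a temp file (the path stands in for /etc/hosts; the stale IP in the initial contents is invented to show the replacement):

```shell
#!/usr/bin/env bash
# Sketch: minikube's grep-v-then-append /etc/hosts rewrite, on a temp file.
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.9\tcontrol-plane.minikube.internal\n' > "$hosts"

# Drop any old entry for the name, then append the current mapping.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '192.168.49.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Writing to a scratch file and copying back (the log uses `/tmp/h.$$` plus `sudo cp`) avoids truncating /etc/hosts while it is still being read by the pipeline.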
	I1201 20:38:14.085926  487012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:38:14.212078  487012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:38:14.230213  487012 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185 for IP: 192.168.49.2
	I1201 20:38:14.230239  487012 certs.go:195] generating shared ca certs ...
	I1201 20:38:14.230258  487012 certs.go:227] acquiring lock for ca certs: {Name:mk0475ccdbd6f854bab22fd8dfb32cc1af021336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:14.230403  487012 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key
	I1201 20:38:14.800855  487012 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt ...
	I1201 20:38:14.800891  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt: {Name:mk9fe7877c71b72180f6a27d4f902ec2ec04e60b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:14.801096  487012 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key ...
	I1201 20:38:14.801111  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key: {Name:mk899a0cc35d8e3bbf101a27d9c94b28eb4fb86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:14.801201  487012 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key
	I1201 20:38:15.074083  487012 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt ...
	I1201 20:38:15.074118  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt: {Name:mk99934bf2ab6aeba7185d55ef0520915ada0c3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:15.074321  487012 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key ...
	I1201 20:38:15.074335  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key: {Name:mk9662579547bce463933ce154561132d5c1876e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:15.074427  487012 certs.go:257] generating profile certs ...
	I1201 20:38:15.074489  487012 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.key
	I1201 20:38:15.074507  487012 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt with IP's: []
	I1201 20:38:15.427314  487012 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt ...
	I1201 20:38:15.427349  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: {Name:mkef1fab619944026e5c7d4ee81bc92ca8d90c44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:15.427547  487012 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.key ...
	I1201 20:38:15.427562  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.key: {Name:mka5a6f220819854db9e95d3a642773ca88b1d1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:15.427655  487012 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.key.b13e0a5a
	I1201 20:38:15.427679  487012 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.crt.b13e0a5a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1201 20:38:15.805369  487012 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.crt.b13e0a5a ...
	I1201 20:38:15.805402  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.crt.b13e0a5a: {Name:mk7b8677589e1b0f0cdcce61d3108c968878641d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:15.805597  487012 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.key.b13e0a5a ...
	I1201 20:38:15.805622  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.key.b13e0a5a: {Name:mkc2d0134106a2ede954dfff9ec2d5e3ac522f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:15.805734  487012 certs.go:382] copying /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.crt.b13e0a5a -> /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.crt
	I1201 20:38:15.805822  487012 certs.go:386] copying /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.key.b13e0a5a -> /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.key
	I1201 20:38:15.805877  487012 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/proxy-client.key
	I1201 20:38:15.805898  487012 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/proxy-client.crt with IP's: []
	I1201 20:38:15.959077  487012 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/proxy-client.crt ...
	I1201 20:38:15.959113  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/proxy-client.crt: {Name:mk4a47b4c9f51d8fb32f63b7ed11d2b04d887b07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:15.959313  487012 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/proxy-client.key ...
	I1201 20:38:15.959326  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/proxy-client.key: {Name:mk68e6758ea083b22239fcdf82e2a70a6d38c3c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:15.959550  487012 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 20:38:15.959600  487012 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:38:15.959639  487012 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:38:15.959671  487012 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem (1675 bytes)
	I1201 20:38:15.960326  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:38:15.980671  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 20:38:16.001042  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:38:16.026386  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1201 20:38:16.047096  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1201 20:38:16.066948  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:38:16.085863  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:38:16.105096  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 20:38:16.125404  487012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:38:16.144692  487012 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:38:16.158897  487012 ssh_runner.go:195] Run: openssl version
	I1201 20:38:16.165528  487012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:38:16.174622  487012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:38:16.178731  487012 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:38:16.178813  487012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:38:16.222043  487012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:38:16.230916  487012 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:38:16.234913  487012 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1201 20:38:16.234975  487012 kubeadm.go:401] StartCluster: {Name:addons-947185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-947185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:38:16.235058  487012 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:38:16.235118  487012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:38:16.267866  487012 cri.go:89] found id: ""
	I1201 20:38:16.267956  487012 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:38:16.276455  487012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 20:38:16.284895  487012 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 20:38:16.284973  487012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 20:38:16.293378  487012 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 20:38:16.293403  487012 kubeadm.go:158] found existing configuration files:
	
	I1201 20:38:16.293462  487012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1201 20:38:16.301876  487012 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 20:38:16.301977  487012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 20:38:16.310590  487012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1201 20:38:16.320944  487012 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 20:38:16.321018  487012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 20:38:16.329419  487012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1201 20:38:16.337926  487012 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 20:38:16.338003  487012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 20:38:16.346135  487012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1201 20:38:16.354244  487012 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 20:38:16.354321  487012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 20:38:16.362535  487012 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 20:38:16.410121  487012 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1201 20:38:16.410519  487012 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 20:38:16.436356  487012 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 20:38:16.436433  487012 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1201 20:38:16.436475  487012 kubeadm.go:319] OS: Linux
	I1201 20:38:16.436525  487012 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 20:38:16.436579  487012 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1201 20:38:16.436634  487012 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 20:38:16.436708  487012 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 20:38:16.436761  487012 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 20:38:16.436813  487012 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 20:38:16.436863  487012 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 20:38:16.436914  487012 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 20:38:16.436965  487012 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1201 20:38:16.509998  487012 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 20:38:16.510119  487012 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 20:38:16.510222  487012 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 20:38:16.519480  487012 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 20:38:16.526032  487012 out.go:252]   - Generating certificates and keys ...
	I1201 20:38:16.526148  487012 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 20:38:16.526226  487012 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 20:38:16.767727  487012 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1201 20:38:17.814927  487012 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1201 20:38:18.239971  487012 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1201 20:38:18.753538  487012 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1201 20:38:19.249012  487012 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1201 20:38:19.249385  487012 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-947185 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1201 20:38:20.276815  487012 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1201 20:38:20.276969  487012 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-947185 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1201 20:38:21.123713  487012 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1201 20:38:21.344660  487012 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1201 20:38:22.338049  487012 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1201 20:38:22.338357  487012 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 20:38:22.673911  487012 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 20:38:23.114522  487012 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 20:38:24.256949  487012 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 20:38:24.540344  487012 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 20:38:24.980102  487012 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 20:38:24.981457  487012 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 20:38:24.985629  487012 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 20:38:24.989077  487012 out.go:252]   - Booting up control plane ...
	I1201 20:38:24.989190  487012 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 20:38:24.989270  487012 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 20:38:24.989955  487012 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 20:38:25.007804  487012 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 20:38:25.008196  487012 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 20:38:25.016918  487012 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 20:38:25.017247  487012 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 20:38:25.017294  487012 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 20:38:25.161354  487012 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 20:38:25.161477  487012 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 20:38:26.669814  487012 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.508117066s
	I1201 20:38:26.673746  487012 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1201 20:38:26.673977  487012 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1201 20:38:26.674076  487012 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1201 20:38:26.674158  487012 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1201 20:38:29.301212  487012 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.626385758s
	I1201 20:38:30.768029  487012 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.093588792s
	I1201 20:38:32.676282  487012 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001620812s
	I1201 20:38:32.717414  487012 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1201 20:38:32.733456  487012 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1201 20:38:32.753165  487012 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1201 20:38:32.753374  487012 kubeadm.go:319] [mark-control-plane] Marking the node addons-947185 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1201 20:38:32.766123  487012 kubeadm.go:319] [bootstrap-token] Using token: v7wip5.2p795q6uvmytpget
	I1201 20:38:32.769051  487012 out.go:252]   - Configuring RBAC rules ...
	I1201 20:38:32.769174  487012 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1201 20:38:32.774620  487012 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1201 20:38:32.788469  487012 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1201 20:38:32.793892  487012 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1201 20:38:32.798826  487012 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1201 20:38:32.803900  487012 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1201 20:38:33.083783  487012 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1201 20:38:33.522675  487012 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1201 20:38:34.083697  487012 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1201 20:38:34.084866  487012 kubeadm.go:319] 
	I1201 20:38:34.084940  487012 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1201 20:38:34.084947  487012 kubeadm.go:319] 
	I1201 20:38:34.085023  487012 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1201 20:38:34.085028  487012 kubeadm.go:319] 
	I1201 20:38:34.085053  487012 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1201 20:38:34.085110  487012 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1201 20:38:34.085159  487012 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1201 20:38:34.085163  487012 kubeadm.go:319] 
	I1201 20:38:34.085216  487012 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1201 20:38:34.085221  487012 kubeadm.go:319] 
	I1201 20:38:34.085268  487012 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1201 20:38:34.085273  487012 kubeadm.go:319] 
	I1201 20:38:34.085323  487012 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1201 20:38:34.085396  487012 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1201 20:38:34.085463  487012 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1201 20:38:34.085468  487012 kubeadm.go:319] 
	I1201 20:38:34.085550  487012 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1201 20:38:34.085641  487012 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1201 20:38:34.085647  487012 kubeadm.go:319] 
	I1201 20:38:34.085727  487012 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token v7wip5.2p795q6uvmytpget \
	I1201 20:38:34.085827  487012 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ba416c8b9f9df321471bca98b9f543ca561a2f4cf5ae7c15e9cc221036e7ebbc \
	I1201 20:38:34.085848  487012 kubeadm.go:319] 	--control-plane 
	I1201 20:38:34.085853  487012 kubeadm.go:319] 
	I1201 20:38:34.085934  487012 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1201 20:38:34.085938  487012 kubeadm.go:319] 
	I1201 20:38:34.086030  487012 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token v7wip5.2p795q6uvmytpget \
	I1201 20:38:34.086141  487012 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ba416c8b9f9df321471bca98b9f543ca561a2f4cf5ae7c15e9cc221036e7ebbc 
	I1201 20:38:34.089144  487012 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1201 20:38:34.089382  487012 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1201 20:38:34.089490  487012 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 20:38:34.089511  487012 cni.go:84] Creating CNI manager for ""
	I1201 20:38:34.089521  487012 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:38:34.092802  487012 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1201 20:38:34.096042  487012 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1201 20:38:34.100679  487012 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1201 20:38:34.100707  487012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1201 20:38:34.115707  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1201 20:38:34.436805  487012 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1201 20:38:34.436964  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:34.437047  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-947185 minikube.k8s.io/updated_at=2025_12_01T20_38_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9 minikube.k8s.io/name=addons-947185 minikube.k8s.io/primary=true
	I1201 20:38:34.658322  487012 ops.go:34] apiserver oom_adj: -16
	I1201 20:38:34.658448  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:35.159027  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:35.658780  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:36.159075  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:36.658691  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:37.159118  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:37.659162  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:38.158788  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:38.659243  487012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:38:38.772336  487012 kubeadm.go:1114] duration metric: took 4.335422723s to wait for elevateKubeSystemPrivileges
	I1201 20:38:38.772363  487012 kubeadm.go:403] duration metric: took 22.537392775s to StartCluster
	I1201 20:38:38.772380  487012 settings.go:142] acquiring lock: {Name:mk783c1fd28fb527bb837882511f132133dc86fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:38.772495  487012 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 20:38:38.772935  487012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/kubeconfig: {Name:mk92cfd0553ba70a7f11610c1bc1b8b04b905ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:38:38.773145  487012 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:38:38.773247  487012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1201 20:38:38.773485  487012 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:38:38.773515  487012 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1201 20:38:38.773584  487012 addons.go:70] Setting yakd=true in profile "addons-947185"
	I1201 20:38:38.773598  487012 addons.go:239] Setting addon yakd=true in "addons-947185"
	I1201 20:38:38.773624  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.774143  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.774719  487012 addons.go:70] Setting metrics-server=true in profile "addons-947185"
	I1201 20:38:38.774747  487012 addons.go:239] Setting addon metrics-server=true in "addons-947185"
	I1201 20:38:38.774760  487012 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-947185"
	I1201 20:38:38.774778  487012 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-947185"
	I1201 20:38:38.774781  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.774801  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.775254  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.775338  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.778447  487012 addons.go:70] Setting registry=true in profile "addons-947185"
	I1201 20:38:38.778590  487012 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-947185"
	I1201 20:38:38.778616  487012 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-947185"
	I1201 20:38:38.778657  487012 addons.go:239] Setting addon registry=true in "addons-947185"
	I1201 20:38:38.778682  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.778823  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.779002  487012 addons.go:70] Setting cloud-spanner=true in profile "addons-947185"
	I1201 20:38:38.779015  487012 addons.go:239] Setting addon cloud-spanner=true in "addons-947185"
	I1201 20:38:38.779033  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.779402  487012 addons.go:70] Setting registry-creds=true in profile "addons-947185"
	I1201 20:38:38.779427  487012 addons.go:239] Setting addon registry-creds=true in "addons-947185"
	I1201 20:38:38.779467  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.779722  487012 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-947185"
	I1201 20:38:38.779793  487012 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-947185"
	I1201 20:38:38.779814  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.779916  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.780658  487012 addons.go:70] Setting default-storageclass=true in profile "addons-947185"
	I1201 20:38:38.780687  487012 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-947185"
	I1201 20:38:38.782323  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.785392  487012 addons.go:70] Setting storage-provisioner=true in profile "addons-947185"
	I1201 20:38:38.785432  487012 addons.go:239] Setting addon storage-provisioner=true in "addons-947185"
	I1201 20:38:38.785477  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.786042  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.791711  487012 addons.go:70] Setting gcp-auth=true in profile "addons-947185"
	I1201 20:38:38.791763  487012 mustload.go:66] Loading cluster: addons-947185
	I1201 20:38:38.792005  487012 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:38:38.792335  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.803471  487012 addons.go:70] Setting ingress=true in profile "addons-947185"
	I1201 20:38:38.803510  487012 addons.go:239] Setting addon ingress=true in "addons-947185"
	I1201 20:38:38.803558  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.804059  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.817649  487012 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-947185"
	I1201 20:38:38.817699  487012 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-947185"
	I1201 20:38:38.818065  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.826867  487012 addons.go:70] Setting ingress-dns=true in profile "addons-947185"
	I1201 20:38:38.826908  487012 addons.go:239] Setting addon ingress-dns=true in "addons-947185"
	I1201 20:38:38.826951  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.827498  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.852356  487012 addons.go:70] Setting volcano=true in profile "addons-947185"
	I1201 20:38:38.852400  487012 addons.go:239] Setting addon volcano=true in "addons-947185"
	I1201 20:38:38.852439  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.852931  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.855504  487012 addons.go:70] Setting inspektor-gadget=true in profile "addons-947185"
	I1201 20:38:38.855549  487012 addons.go:239] Setting addon inspektor-gadget=true in "addons-947185"
	I1201 20:38:38.855590  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.858039  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.872866  487012 addons.go:70] Setting volumesnapshots=true in profile "addons-947185"
	I1201 20:38:38.872903  487012 addons.go:239] Setting addon volumesnapshots=true in "addons-947185"
	I1201 20:38:38.872938  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:38.873444  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.877476  487012 out.go:179] * Verifying Kubernetes components...
	I1201 20:38:38.881430  487012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:38:38.881996  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.902107  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.961632  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:38.984689  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:39.023304  487012 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1201 20:38:39.032302  487012 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1201 20:38:39.040118  487012 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1201 20:38:39.048282  487012 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:38:39.048368  487012 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1201 20:38:39.049700  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1201 20:38:39.049828  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.050096  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:39.048394  487012 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1201 20:38:39.073885  487012 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1201 20:38:39.073998  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.092294  487012 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1201 20:38:39.098213  487012 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1201 20:38:39.098289  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1201 20:38:39.098390  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.101391  487012 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:38:39.101474  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:38:39.101570  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.113900  487012 addons.go:239] Setting addon default-storageclass=true in "addons-947185"
	I1201 20:38:39.113994  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:39.114504  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:39.123733  487012 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1201 20:38:39.124717  487012 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1201 20:38:39.124807  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	W1201 20:38:39.145027  487012 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1201 20:38:39.147609  487012 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-947185"
	I1201 20:38:39.147650  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:39.151200  487012 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1201 20:38:39.151431  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:39.152455  487012 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1201 20:38:39.169858  487012 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1201 20:38:39.173554  487012 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1201 20:38:39.179310  487012 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1201 20:38:39.182998  487012 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1201 20:38:39.183034  487012 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1201 20:38:39.183114  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.183465  487012 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1201 20:38:39.184428  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1201 20:38:39.184500  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.208646  487012 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1201 20:38:39.208671  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1201 20:38:39.208737  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.218507  487012 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1201 20:38:39.220350  487012 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1201 20:38:39.243250  487012 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1201 20:38:39.249670  487012 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1201 20:38:39.249754  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1201 20:38:39.249851  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.283666  487012 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1201 20:38:39.283703  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1201 20:38:39.283769  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.313160  487012 out.go:179]   - Using image docker.io/registry:3.0.0
	I1201 20:38:39.317191  487012 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1201 20:38:39.317217  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1201 20:38:39.317283  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.335366  487012 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1201 20:38:39.335791  487012 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1201 20:38:39.347252  487012 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1201 20:38:39.350652  487012 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1201 20:38:39.350679  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1201 20:38:39.350762  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.376613  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.386803  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.387427  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.388015  487012 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 20:38:39.388029  487012 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 20:38:39.388159  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.388427  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.389291  487012 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1201 20:38:39.399453  487012 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1201 20:38:39.403208  487012 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1201 20:38:39.408906  487012 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1201 20:38:39.417454  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.423810  487012 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1201 20:38:39.427631  487012 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1201 20:38:39.432852  487012 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1201 20:38:39.432937  487012 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1201 20:38:39.433052  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.446489  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.449680  487012 out.go:179]   - Using image docker.io/busybox:stable
	I1201 20:38:39.458362  487012 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1201 20:38:39.462630  487012 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1201 20:38:39.462658  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1201 20:38:39.462731  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:39.468728  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.480354  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.520690  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.539525  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.540379  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.568163  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	W1201 20:38:39.575283  487012 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1201 20:38:39.575402  487012 retry.go:31] will retry after 164.846186ms: ssh: handshake failed: EOF
	I1201 20:38:39.592670  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.597195  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:39.600622  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	W1201 20:38:39.602088  487012 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1201 20:38:39.602116  487012 retry.go:31] will retry after 356.967056ms: ssh: handshake failed: EOF
	W1201 20:38:39.742247  487012 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1201 20:38:39.742325  487012 retry.go:31] will retry after 316.520771ms: ssh: handshake failed: EOF
	I1201 20:38:39.781714  487012 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.008441997s)
	I1201 20:38:39.781750  487012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:38:39.781997  487012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1201 20:38:39.895670  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1201 20:38:40.032755  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:38:40.060811  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1201 20:38:40.076260  487012 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1201 20:38:40.076283  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1201 20:38:40.097730  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1201 20:38:40.216995  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1201 20:38:40.227519  487012 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1201 20:38:40.227596  487012 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1201 20:38:40.244856  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1201 20:38:40.269725  487012 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1201 20:38:40.269805  487012 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1201 20:38:40.286245  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 20:38:40.286965  487012 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1201 20:38:40.286987  487012 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1201 20:38:40.289857  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1201 20:38:40.321434  487012 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1201 20:38:40.321457  487012 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1201 20:38:40.337236  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1201 20:38:40.388288  487012 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1201 20:38:40.388317  487012 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1201 20:38:40.407466  487012 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1201 20:38:40.407492  487012 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1201 20:38:40.470163  487012 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1201 20:38:40.470189  487012 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1201 20:38:40.479253  487012 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1201 20:38:40.479280  487012 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1201 20:38:40.494526  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1201 20:38:40.570934  487012 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1201 20:38:40.570961  487012 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1201 20:38:40.652009  487012 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1201 20:38:40.652034  487012 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1201 20:38:40.668790  487012 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1201 20:38:40.668818  487012 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1201 20:38:40.827007  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1201 20:38:40.862620  487012 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1201 20:38:40.862652  487012 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1201 20:38:40.879507  487012 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1201 20:38:40.879532  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1201 20:38:40.947374  487012 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1201 20:38:40.947402  487012 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1201 20:38:40.956733  487012 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1201 20:38:40.956762  487012 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1201 20:38:41.181069  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1201 20:38:41.247285  487012 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1201 20:38:41.247317  487012 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1201 20:38:41.284501  487012 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1201 20:38:41.284529  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1201 20:38:41.404154  487012 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1201 20:38:41.404186  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1201 20:38:41.516402  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1201 20:38:41.550983  487012 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1201 20:38:41.551013  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1201 20:38:41.697137  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1201 20:38:41.876570  487012 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1201 20:38:41.876598  487012 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1201 20:38:41.991387  487012 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.209361706s)
	I1201 20:38:41.991420  487012 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1201 20:38:41.992590  487012 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.210628692s)
	I1201 20:38:41.992708  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.097008775s)
	I1201 20:38:41.993576  487012 node_ready.go:35] waiting up to 6m0s for node "addons-947185" to be "Ready" ...
	I1201 20:38:42.248618  487012 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1201 20:38:42.248646  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1201 20:38:42.440984  487012 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1201 20:38:42.441009  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1201 20:38:42.499048  487012 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-947185" context rescaled to 1 replicas
	I1201 20:38:42.670216  487012 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1201 20:38:42.670243  487012 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1201 20:38:42.805199  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1201 20:38:43.948104  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.915310656s)
	I1201 20:38:43.948237  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.887400524s)
	I1201 20:38:43.948292  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.850486574s)
	W1201 20:38:44.041915  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:38:45.782974  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.565940277s)
	I1201 20:38:45.783007  487012 addons.go:495] Verifying addon ingress=true in "addons-947185"
	I1201 20:38:45.783301  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.538314703s)
	I1201 20:38:45.783347  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.497027947s)
	I1201 20:38:45.783465  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.493581779s)
	I1201 20:38:45.783535  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.446268287s)
	I1201 20:38:45.783611  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.289060754s)
	I1201 20:38:45.783624  487012 addons.go:495] Verifying addon metrics-server=true in "addons-947185"
	I1201 20:38:45.783666  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.956634923s)
	I1201 20:38:45.783836  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.602741846s)
	I1201 20:38:45.786110  487012 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-947185 service yakd-dashboard -n yakd-dashboard
	
	I1201 20:38:45.786276  487012 out.go:179] * Verifying ingress addon...
	I1201 20:38:45.789897  487012 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1201 20:38:45.813459  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.29700949s)
	W1201 20:38:45.813497  487012 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1201 20:38:45.813518  487012 retry.go:31] will retry after 129.835211ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1201 20:38:45.813566  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.116397207s)
	I1201 20:38:45.813580  487012 addons.go:495] Verifying addon registry=true in "addons-947185"
	I1201 20:38:45.816975  487012 out.go:179] * Verifying registry addon...
	I1201 20:38:45.820118  487012 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1201 20:38:45.820186  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:45.821402  487012 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1201 20:38:45.829885  487012 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1201 20:38:45.851240  487012 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1201 20:38:45.851270  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:45.944082  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1201 20:38:46.201395  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.396146794s)
	I1201 20:38:46.201432  487012 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-947185"
	I1201 20:38:46.204494  487012 out.go:179] * Verifying csi-hostpath-driver addon...
	I1201 20:38:46.208175  487012 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1201 20:38:46.229303  487012 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1201 20:38:46.229328  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:46.293011  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:46.324851  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1201 20:38:46.496893  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:38:46.711908  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:46.756086  487012 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1201 20:38:46.756201  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:46.774316  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:46.815967  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:46.824605  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:46.891053  487012 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1201 20:38:46.905981  487012 addons.go:239] Setting addon gcp-auth=true in "addons-947185"
	I1201 20:38:46.906060  487012 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:38:46.906546  487012 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:38:46.924154  487012 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1201 20:38:46.924218  487012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:38:46.942079  487012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:38:47.212405  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:47.293669  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:47.324729  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:47.712190  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:47.792994  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:47.824585  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:48.211746  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:48.293600  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:48.325275  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1201 20:38:48.501001  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:38:48.712247  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:48.754994  487012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.81086588s)
	I1201 20:38:48.755051  487012 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.830869689s)
	I1201 20:38:48.758407  487012 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1201 20:38:48.761406  487012 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1201 20:38:48.764689  487012 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1201 20:38:48.764721  487012 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1201 20:38:48.778289  487012 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1201 20:38:48.778315  487012 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1201 20:38:48.793657  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:48.794887  487012 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1201 20:38:48.794949  487012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1201 20:38:48.808769  487012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1201 20:38:48.825127  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:49.212143  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:49.303051  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:49.338486  487012 addons.go:495] Verifying addon gcp-auth=true in "addons-947185"
	I1201 20:38:49.340245  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:49.341873  487012 out.go:179] * Verifying gcp-auth addon...
	I1201 20:38:49.345670  487012 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1201 20:38:49.430250  487012 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1201 20:38:49.430272  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:49.711409  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:49.793487  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:49.825499  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:49.849282  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:50.211260  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:50.293344  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:50.325229  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:50.348921  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:50.711628  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:50.793883  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:50.825574  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:50.849504  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:38:50.996861  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:38:51.211811  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:51.293961  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:51.324636  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:51.349395  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:51.712270  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:51.793177  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:51.824868  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:51.849512  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:52.211972  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:52.294509  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:52.324388  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:52.349519  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:52.711312  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:52.793472  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:52.825464  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:52.849448  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:53.212006  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:53.292885  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:53.325019  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:53.348648  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:38:53.496398  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:38:53.711863  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:53.792863  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:53.825479  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:53.849522  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:54.211688  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:54.293728  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:54.324863  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:54.348708  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:54.711727  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:54.793844  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:54.824171  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:54.849309  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:55.211431  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:55.293631  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:55.324742  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:55.349470  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:55.711800  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:55.793159  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:55.824965  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:55.848676  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:38:55.996545  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:38:56.211460  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:56.293288  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:56.325147  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:56.349024  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:56.712582  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:56.793844  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:56.824867  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:56.848910  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:57.211970  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:57.293468  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:57.325383  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:57.349194  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:57.710952  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:57.794076  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:57.825196  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:57.849845  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:38:57.996885  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:38:58.212073  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:58.293070  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:58.324973  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:58.349048  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:58.712013  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:58.794190  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:58.825206  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:58.848865  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:59.211935  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:59.293060  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:59.325026  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:59.349330  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:38:59.711862  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:38:59.794228  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:38:59.825018  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:38:59.848922  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:38:59.997713  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:39:00.213876  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:00.315949  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:00.330622  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:00.360562  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:00.712488  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:00.793440  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:00.824581  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:00.851104  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:01.212618  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:01.313800  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:01.324603  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:01.349958  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:01.712052  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:01.793356  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:01.825247  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:01.848950  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:02.211740  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:02.293944  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:02.324850  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:02.348877  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:39:02.496674  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:39:02.713035  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:02.793182  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:02.825046  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:02.848811  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:03.211831  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:03.292824  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:03.324898  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:03.348547  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:03.712336  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:03.793292  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:03.825568  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:03.849503  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:04.212262  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:04.296564  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:04.324739  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:04.349562  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:04.711658  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:04.793807  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:04.824926  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:04.848988  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:39:04.997066  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:39:05.211184  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:05.292981  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:05.324805  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:05.349741  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:05.711638  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:05.793523  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:05.825430  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:05.849512  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:06.211412  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:06.293255  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:06.325421  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:06.349395  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:06.711954  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:06.793196  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:06.824883  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:06.848830  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:07.211892  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:07.293028  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:07.325027  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:07.348891  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:39:07.496633  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:39:07.712237  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:07.800800  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:07.824829  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:07.849462  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:08.211639  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:08.294206  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:08.325359  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:08.349385  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:08.711967  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:08.794200  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:08.825828  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:08.849720  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:09.211319  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:09.294092  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:09.324649  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:09.348657  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:09.711929  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:09.793789  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:09.824700  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:09.849542  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:39:09.997966  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:39:10.211113  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:10.293170  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:10.325024  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:10.348920  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:10.711772  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:10.793721  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:10.824515  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:10.849260  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:11.211258  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:11.293851  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:11.324563  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:11.349465  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:11.711309  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:11.793712  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:11.824782  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:11.848796  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:39:11.998508  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:39:12.212207  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:12.293251  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:12.324892  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:12.348851  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:12.712426  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:12.793392  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:12.824246  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:12.849103  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:13.215102  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:13.292910  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:13.324891  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:13.348887  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:13.711550  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:13.794939  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:13.832891  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:13.853398  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:14.212315  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:14.293642  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:14.324754  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:14.348723  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:39:14.496912  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:39:14.712748  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:14.794185  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:14.825173  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:14.849400  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:15.211447  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:15.293401  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:15.324406  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:15.349315  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:15.711482  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:15.793575  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:15.824298  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:15.849268  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:16.211239  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:16.293325  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:16.325147  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:16.349040  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:16.712193  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:16.793287  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:16.825167  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:16.849420  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:39:16.996481  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:39:17.211611  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:17.293936  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:17.325067  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:17.348750  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:17.712015  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:17.793439  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:17.824655  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:17.848836  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:18.211732  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:18.293693  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:18.324346  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:18.349285  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:18.712130  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:18.793627  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:18.824525  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:18.849557  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1201 20:39:18.997824  487012 node_ready.go:57] node "addons-947185" has "Ready":"False" status (will retry)
	I1201 20:39:19.211431  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:19.293426  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:19.324363  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:19.349628  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:19.711839  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:19.793641  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:19.824318  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:19.849648  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:20.218540  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:20.357102  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:20.362341  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:20.366082  487012 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1201 20:39:20.366106  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:20.501169  487012 node_ready.go:49] node "addons-947185" is "Ready"
	I1201 20:39:20.501200  487012 node_ready.go:38] duration metric: took 38.507586694s for node "addons-947185" to be "Ready" ...
	I1201 20:39:20.501215  487012 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:39:20.501299  487012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:39:20.521846  487012 api_server.go:72] duration metric: took 41.748664512s to wait for apiserver process to appear ...
	I1201 20:39:20.521874  487012 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:39:20.521897  487012 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1201 20:39:20.561317  487012 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1201 20:39:20.562595  487012 api_server.go:141] control plane version: v1.34.2
	I1201 20:39:20.562627  487012 api_server.go:131] duration metric: took 40.744794ms to wait for apiserver health ...
	I1201 20:39:20.562653  487012 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:39:20.621706  487012 system_pods.go:59] 19 kube-system pods found
	I1201 20:39:20.621748  487012 system_pods.go:61] "coredns-66bc5c9577-q75zt" [86654e25-6e26-4560-8d18-004462848af1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:39:20.621756  487012 system_pods.go:61] "csi-hostpath-attacher-0" [8882ae38-7b51-48e3-b45f-6a57e1d061a5] Pending
	I1201 20:39:20.621801  487012 system_pods.go:61] "csi-hostpath-resizer-0" [ba624dda-a9cc-4957-b1e8-a3f4fce7a73d] Pending
	I1201 20:39:20.621807  487012 system_pods.go:61] "csi-hostpathplugin-z8frr" [54aeb006-3353-4509-b7cf-de3d4a788010] Pending
	I1201 20:39:20.621820  487012 system_pods.go:61] "etcd-addons-947185" [3c528131-96e4-4354-85af-e7458a367454] Running
	I1201 20:39:20.621824  487012 system_pods.go:61] "kindnet-5m5nn" [ececdb4a-2857-423e-a7fe-064b8e4f4367] Running
	I1201 20:39:20.621828  487012 system_pods.go:61] "kube-apiserver-addons-947185" [7d5d681f-2541-4c55-b4ee-fadc73c99dc1] Running
	I1201 20:39:20.621832  487012 system_pods.go:61] "kube-controller-manager-addons-947185" [09c58456-d5d8-43ea-813c-6916dd523945] Running
	I1201 20:39:20.621842  487012 system_pods.go:61] "kube-ingress-dns-minikube" [b1a2f555-4a13-46f0-8cef-06487f0d428e] Pending
	I1201 20:39:20.621860  487012 system_pods.go:61] "kube-proxy-6l2m9" [8f2ae58e-c00a-4eda-8189-afd1332e44e0] Running
	I1201 20:39:20.621869  487012 system_pods.go:61] "kube-scheduler-addons-947185" [b9e3706a-7729-4d2d-b67d-63466041f58a] Running
	I1201 20:39:20.621873  487012 system_pods.go:61] "metrics-server-85b7d694d7-wwwt5" [32b3ea6f-e4c4-4e63-8992-e1371c406519] Pending
	I1201 20:39:20.621889  487012 system_pods.go:61] "nvidia-device-plugin-daemonset-mm775" [ff4d850a-4fc7-4f97-b4d7-a5fec7ea255d] Pending
	I1201 20:39:20.621900  487012 system_pods.go:61] "registry-6b586f9694-m876b" [99b02fcf-a463-48f7-b563-a88a6be051c6] Pending
	I1201 20:39:20.621906  487012 system_pods.go:61] "registry-creds-764b6fb674-qc52j" [178c8099-fe59-4a00-9d1a-a0a80a1b7d7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 20:39:20.621910  487012 system_pods.go:61] "registry-proxy-scbhm" [42f9b46b-5402-4199-a084-012a354ce2c6] Pending
	I1201 20:39:20.621923  487012 system_pods.go:61] "snapshot-controller-7d9fbc56b8-h8r4s" [06fd0354-f315-44f9-9068-c26a9a2b06d5] Pending
	I1201 20:39:20.621927  487012 system_pods.go:61] "snapshot-controller-7d9fbc56b8-r8wng" [cb5f3819-4d80-432e-86b6-a32cd6b18a29] Pending
	I1201 20:39:20.621931  487012 system_pods.go:61] "storage-provisioner" [00707e13-d913-4314-876e-5ca4180ae588] Pending
	I1201 20:39:20.621937  487012 system_pods.go:74] duration metric: took 59.272542ms to wait for pod list to return data ...
	I1201 20:39:20.621945  487012 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:39:20.632112  487012 default_sa.go:45] found service account: "default"
	I1201 20:39:20.632139  487012 default_sa.go:55] duration metric: took 10.186391ms for default service account to be created ...
	I1201 20:39:20.632150  487012 system_pods.go:116] waiting for k8s-apps to be running ...
	I1201 20:39:20.656756  487012 system_pods.go:86] 19 kube-system pods found
	I1201 20:39:20.656796  487012 system_pods.go:89] "coredns-66bc5c9577-q75zt" [86654e25-6e26-4560-8d18-004462848af1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:39:20.656803  487012 system_pods.go:89] "csi-hostpath-attacher-0" [8882ae38-7b51-48e3-b45f-6a57e1d061a5] Pending
	I1201 20:39:20.656808  487012 system_pods.go:89] "csi-hostpath-resizer-0" [ba624dda-a9cc-4957-b1e8-a3f4fce7a73d] Pending
	I1201 20:39:20.656812  487012 system_pods.go:89] "csi-hostpathplugin-z8frr" [54aeb006-3353-4509-b7cf-de3d4a788010] Pending
	I1201 20:39:20.656816  487012 system_pods.go:89] "etcd-addons-947185" [3c528131-96e4-4354-85af-e7458a367454] Running
	I1201 20:39:20.656821  487012 system_pods.go:89] "kindnet-5m5nn" [ececdb4a-2857-423e-a7fe-064b8e4f4367] Running
	I1201 20:39:20.656825  487012 system_pods.go:89] "kube-apiserver-addons-947185" [7d5d681f-2541-4c55-b4ee-fadc73c99dc1] Running
	I1201 20:39:20.656829  487012 system_pods.go:89] "kube-controller-manager-addons-947185" [09c58456-d5d8-43ea-813c-6916dd523945] Running
	I1201 20:39:20.656859  487012 system_pods.go:89] "kube-ingress-dns-minikube" [b1a2f555-4a13-46f0-8cef-06487f0d428e] Pending
	I1201 20:39:20.656865  487012 system_pods.go:89] "kube-proxy-6l2m9" [8f2ae58e-c00a-4eda-8189-afd1332e44e0] Running
	I1201 20:39:20.656877  487012 system_pods.go:89] "kube-scheduler-addons-947185" [b9e3706a-7729-4d2d-b67d-63466041f58a] Running
	I1201 20:39:20.656881  487012 system_pods.go:89] "metrics-server-85b7d694d7-wwwt5" [32b3ea6f-e4c4-4e63-8992-e1371c406519] Pending
	I1201 20:39:20.656885  487012 system_pods.go:89] "nvidia-device-plugin-daemonset-mm775" [ff4d850a-4fc7-4f97-b4d7-a5fec7ea255d] Pending
	I1201 20:39:20.656896  487012 system_pods.go:89] "registry-6b586f9694-m876b" [99b02fcf-a463-48f7-b563-a88a6be051c6] Pending
	I1201 20:39:20.656903  487012 system_pods.go:89] "registry-creds-764b6fb674-qc52j" [178c8099-fe59-4a00-9d1a-a0a80a1b7d7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 20:39:20.656907  487012 system_pods.go:89] "registry-proxy-scbhm" [42f9b46b-5402-4199-a084-012a354ce2c6] Pending
	I1201 20:39:20.656912  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-h8r4s" [06fd0354-f315-44f9-9068-c26a9a2b06d5] Pending
	I1201 20:39:20.656941  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r8wng" [cb5f3819-4d80-432e-86b6-a32cd6b18a29] Pending
	I1201 20:39:20.656952  487012 system_pods.go:89] "storage-provisioner" [00707e13-d913-4314-876e-5ca4180ae588] Pending
	I1201 20:39:20.656977  487012 retry.go:31] will retry after 283.580229ms: missing components: kube-dns
	I1201 20:39:20.747698  487012 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1201 20:39:20.747727  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:20.799585  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:20.827535  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:20.849618  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:20.949284  487012 system_pods.go:86] 19 kube-system pods found
	I1201 20:39:20.949324  487012 system_pods.go:89] "coredns-66bc5c9577-q75zt" [86654e25-6e26-4560-8d18-004462848af1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:39:20.949364  487012 system_pods.go:89] "csi-hostpath-attacher-0" [8882ae38-7b51-48e3-b45f-6a57e1d061a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1201 20:39:20.949379  487012 system_pods.go:89] "csi-hostpath-resizer-0" [ba624dda-a9cc-4957-b1e8-a3f4fce7a73d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1201 20:39:20.949385  487012 system_pods.go:89] "csi-hostpathplugin-z8frr" [54aeb006-3353-4509-b7cf-de3d4a788010] Pending
	I1201 20:39:20.949395  487012 system_pods.go:89] "etcd-addons-947185" [3c528131-96e4-4354-85af-e7458a367454] Running
	I1201 20:39:20.949401  487012 system_pods.go:89] "kindnet-5m5nn" [ececdb4a-2857-423e-a7fe-064b8e4f4367] Running
	I1201 20:39:20.949405  487012 system_pods.go:89] "kube-apiserver-addons-947185" [7d5d681f-2541-4c55-b4ee-fadc73c99dc1] Running
	I1201 20:39:20.949409  487012 system_pods.go:89] "kube-controller-manager-addons-947185" [09c58456-d5d8-43ea-813c-6916dd523945] Running
	I1201 20:39:20.949433  487012 system_pods.go:89] "kube-ingress-dns-minikube" [b1a2f555-4a13-46f0-8cef-06487f0d428e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1201 20:39:20.949444  487012 system_pods.go:89] "kube-proxy-6l2m9" [8f2ae58e-c00a-4eda-8189-afd1332e44e0] Running
	I1201 20:39:20.949450  487012 system_pods.go:89] "kube-scheduler-addons-947185" [b9e3706a-7729-4d2d-b67d-63466041f58a] Running
	I1201 20:39:20.949456  487012 system_pods.go:89] "metrics-server-85b7d694d7-wwwt5" [32b3ea6f-e4c4-4e63-8992-e1371c406519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1201 20:39:20.949465  487012 system_pods.go:89] "nvidia-device-plugin-daemonset-mm775" [ff4d850a-4fc7-4f97-b4d7-a5fec7ea255d] Pending
	I1201 20:39:20.949473  487012 system_pods.go:89] "registry-6b586f9694-m876b" [99b02fcf-a463-48f7-b563-a88a6be051c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1201 20:39:20.949483  487012 system_pods.go:89] "registry-creds-764b6fb674-qc52j" [178c8099-fe59-4a00-9d1a-a0a80a1b7d7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 20:39:20.949488  487012 system_pods.go:89] "registry-proxy-scbhm" [42f9b46b-5402-4199-a084-012a354ce2c6] Pending
	I1201 20:39:20.949496  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-h8r4s" [06fd0354-f315-44f9-9068-c26a9a2b06d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 20:39:20.949524  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r8wng" [cb5f3819-4d80-432e-86b6-a32cd6b18a29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 20:39:20.949544  487012 system_pods.go:89] "storage-provisioner" [00707e13-d913-4314-876e-5ca4180ae588] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:39:20.949566  487012 retry.go:31] will retry after 364.771103ms: missing components: kube-dns
	I1201 20:39:21.217894  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:21.313014  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:21.413832  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:21.414210  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:21.416262  487012 system_pods.go:86] 19 kube-system pods found
	I1201 20:39:21.416300  487012 system_pods.go:89] "coredns-66bc5c9577-q75zt" [86654e25-6e26-4560-8d18-004462848af1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:39:21.416310  487012 system_pods.go:89] "csi-hostpath-attacher-0" [8882ae38-7b51-48e3-b45f-6a57e1d061a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1201 20:39:21.416319  487012 system_pods.go:89] "csi-hostpath-resizer-0" [ba624dda-a9cc-4957-b1e8-a3f4fce7a73d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1201 20:39:21.416359  487012 system_pods.go:89] "csi-hostpathplugin-z8frr" [54aeb006-3353-4509-b7cf-de3d4a788010] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1201 20:39:21.416365  487012 system_pods.go:89] "etcd-addons-947185" [3c528131-96e4-4354-85af-e7458a367454] Running
	I1201 20:39:21.416381  487012 system_pods.go:89] "kindnet-5m5nn" [ececdb4a-2857-423e-a7fe-064b8e4f4367] Running
	I1201 20:39:21.416386  487012 system_pods.go:89] "kube-apiserver-addons-947185" [7d5d681f-2541-4c55-b4ee-fadc73c99dc1] Running
	I1201 20:39:21.416390  487012 system_pods.go:89] "kube-controller-manager-addons-947185" [09c58456-d5d8-43ea-813c-6916dd523945] Running
	I1201 20:39:21.416396  487012 system_pods.go:89] "kube-ingress-dns-minikube" [b1a2f555-4a13-46f0-8cef-06487f0d428e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1201 20:39:21.416404  487012 system_pods.go:89] "kube-proxy-6l2m9" [8f2ae58e-c00a-4eda-8189-afd1332e44e0] Running
	I1201 20:39:21.416437  487012 system_pods.go:89] "kube-scheduler-addons-947185" [b9e3706a-7729-4d2d-b67d-63466041f58a] Running
	I1201 20:39:21.416445  487012 system_pods.go:89] "metrics-server-85b7d694d7-wwwt5" [32b3ea6f-e4c4-4e63-8992-e1371c406519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1201 20:39:21.416456  487012 system_pods.go:89] "nvidia-device-plugin-daemonset-mm775" [ff4d850a-4fc7-4f97-b4d7-a5fec7ea255d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1201 20:39:21.416469  487012 system_pods.go:89] "registry-6b586f9694-m876b" [99b02fcf-a463-48f7-b563-a88a6be051c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1201 20:39:21.416475  487012 system_pods.go:89] "registry-creds-764b6fb674-qc52j" [178c8099-fe59-4a00-9d1a-a0a80a1b7d7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 20:39:21.416496  487012 system_pods.go:89] "registry-proxy-scbhm" [42f9b46b-5402-4199-a084-012a354ce2c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1201 20:39:21.416503  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-h8r4s" [06fd0354-f315-44f9-9068-c26a9a2b06d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 20:39:21.416516  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r8wng" [cb5f3819-4d80-432e-86b6-a32cd6b18a29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 20:39:21.416602  487012 system_pods.go:89] "storage-provisioner" [00707e13-d913-4314-876e-5ca4180ae588] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:39:21.416640  487012 retry.go:31] will retry after 330.733164ms: missing components: kube-dns
	I1201 20:39:21.711750  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:21.752189  487012 system_pods.go:86] 19 kube-system pods found
	I1201 20:39:21.752230  487012 system_pods.go:89] "coredns-66bc5c9577-q75zt" [86654e25-6e26-4560-8d18-004462848af1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:39:21.752240  487012 system_pods.go:89] "csi-hostpath-attacher-0" [8882ae38-7b51-48e3-b45f-6a57e1d061a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1201 20:39:21.752293  487012 system_pods.go:89] "csi-hostpath-resizer-0" [ba624dda-a9cc-4957-b1e8-a3f4fce7a73d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1201 20:39:21.752300  487012 system_pods.go:89] "csi-hostpathplugin-z8frr" [54aeb006-3353-4509-b7cf-de3d4a788010] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1201 20:39:21.752316  487012 system_pods.go:89] "etcd-addons-947185" [3c528131-96e4-4354-85af-e7458a367454] Running
	I1201 20:39:21.752342  487012 system_pods.go:89] "kindnet-5m5nn" [ececdb4a-2857-423e-a7fe-064b8e4f4367] Running
	I1201 20:39:21.752353  487012 system_pods.go:89] "kube-apiserver-addons-947185" [7d5d681f-2541-4c55-b4ee-fadc73c99dc1] Running
	I1201 20:39:21.752360  487012 system_pods.go:89] "kube-controller-manager-addons-947185" [09c58456-d5d8-43ea-813c-6916dd523945] Running
	I1201 20:39:21.752367  487012 system_pods.go:89] "kube-ingress-dns-minikube" [b1a2f555-4a13-46f0-8cef-06487f0d428e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1201 20:39:21.752376  487012 system_pods.go:89] "kube-proxy-6l2m9" [8f2ae58e-c00a-4eda-8189-afd1332e44e0] Running
	I1201 20:39:21.752381  487012 system_pods.go:89] "kube-scheduler-addons-947185" [b9e3706a-7729-4d2d-b67d-63466041f58a] Running
	I1201 20:39:21.752388  487012 system_pods.go:89] "metrics-server-85b7d694d7-wwwt5" [32b3ea6f-e4c4-4e63-8992-e1371c406519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1201 20:39:21.752400  487012 system_pods.go:89] "nvidia-device-plugin-daemonset-mm775" [ff4d850a-4fc7-4f97-b4d7-a5fec7ea255d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1201 20:39:21.752419  487012 system_pods.go:89] "registry-6b586f9694-m876b" [99b02fcf-a463-48f7-b563-a88a6be051c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1201 20:39:21.752432  487012 system_pods.go:89] "registry-creds-764b6fb674-qc52j" [178c8099-fe59-4a00-9d1a-a0a80a1b7d7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 20:39:21.752438  487012 system_pods.go:89] "registry-proxy-scbhm" [42f9b46b-5402-4199-a084-012a354ce2c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1201 20:39:21.752457  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-h8r4s" [06fd0354-f315-44f9-9068-c26a9a2b06d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 20:39:21.752470  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r8wng" [cb5f3819-4d80-432e-86b6-a32cd6b18a29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 20:39:21.752486  487012 system_pods.go:89] "storage-provisioner" [00707e13-d913-4314-876e-5ca4180ae588] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:39:21.752506  487012 retry.go:31] will retry after 526.6024ms: missing components: kube-dns
	I1201 20:39:21.797469  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:21.830061  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:21.897398  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:22.213326  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:22.284210  487012 system_pods.go:86] 19 kube-system pods found
	I1201 20:39:22.284247  487012 system_pods.go:89] "coredns-66bc5c9577-q75zt" [86654e25-6e26-4560-8d18-004462848af1] Running
	I1201 20:39:22.284258  487012 system_pods.go:89] "csi-hostpath-attacher-0" [8882ae38-7b51-48e3-b45f-6a57e1d061a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1201 20:39:22.284299  487012 system_pods.go:89] "csi-hostpath-resizer-0" [ba624dda-a9cc-4957-b1e8-a3f4fce7a73d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1201 20:39:22.284314  487012 system_pods.go:89] "csi-hostpathplugin-z8frr" [54aeb006-3353-4509-b7cf-de3d4a788010] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1201 20:39:22.284321  487012 system_pods.go:89] "etcd-addons-947185" [3c528131-96e4-4354-85af-e7458a367454] Running
	I1201 20:39:22.284331  487012 system_pods.go:89] "kindnet-5m5nn" [ececdb4a-2857-423e-a7fe-064b8e4f4367] Running
	I1201 20:39:22.284335  487012 system_pods.go:89] "kube-apiserver-addons-947185" [7d5d681f-2541-4c55-b4ee-fadc73c99dc1] Running
	I1201 20:39:22.284340  487012 system_pods.go:89] "kube-controller-manager-addons-947185" [09c58456-d5d8-43ea-813c-6916dd523945] Running
	I1201 20:39:22.284379  487012 system_pods.go:89] "kube-ingress-dns-minikube" [b1a2f555-4a13-46f0-8cef-06487f0d428e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1201 20:39:22.284388  487012 system_pods.go:89] "kube-proxy-6l2m9" [8f2ae58e-c00a-4eda-8189-afd1332e44e0] Running
	I1201 20:39:22.284393  487012 system_pods.go:89] "kube-scheduler-addons-947185" [b9e3706a-7729-4d2d-b67d-63466041f58a] Running
	I1201 20:39:22.284399  487012 system_pods.go:89] "metrics-server-85b7d694d7-wwwt5" [32b3ea6f-e4c4-4e63-8992-e1371c406519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1201 20:39:22.284411  487012 system_pods.go:89] "nvidia-device-plugin-daemonset-mm775" [ff4d850a-4fc7-4f97-b4d7-a5fec7ea255d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1201 20:39:22.284417  487012 system_pods.go:89] "registry-6b586f9694-m876b" [99b02fcf-a463-48f7-b563-a88a6be051c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1201 20:39:22.284430  487012 system_pods.go:89] "registry-creds-764b6fb674-qc52j" [178c8099-fe59-4a00-9d1a-a0a80a1b7d7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 20:39:22.284436  487012 system_pods.go:89] "registry-proxy-scbhm" [42f9b46b-5402-4199-a084-012a354ce2c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1201 20:39:22.284461  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-h8r4s" [06fd0354-f315-44f9-9068-c26a9a2b06d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 20:39:22.284474  487012 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r8wng" [cb5f3819-4d80-432e-86b6-a32cd6b18a29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 20:39:22.284479  487012 system_pods.go:89] "storage-provisioner" [00707e13-d913-4314-876e-5ca4180ae588] Running
	I1201 20:39:22.284502  487012 system_pods.go:126] duration metric: took 1.652344925s to wait for k8s-apps to be running ...
	I1201 20:39:22.284518  487012 system_svc.go:44] waiting for kubelet service to be running ....
	I1201 20:39:22.284590  487012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:39:22.294469  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:22.304437  487012 system_svc.go:56] duration metric: took 19.910351ms WaitForService to wait for kubelet
	I1201 20:39:22.304467  487012 kubeadm.go:587] duration metric: took 43.531298863s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:39:22.304487  487012 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:39:22.308464  487012 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1201 20:39:22.308496  487012 node_conditions.go:123] node cpu capacity is 2
	I1201 20:39:22.308511  487012 node_conditions.go:105] duration metric: took 4.018973ms to run NodePressure ...
	I1201 20:39:22.308551  487012 start.go:242] waiting for startup goroutines ...
	I1201 20:39:22.324648  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:22.348839  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:22.712170  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:22.793732  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:22.824946  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:22.850359  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:23.212287  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:23.293099  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:23.325200  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:23.349395  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:23.712046  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:23.793522  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:23.824898  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:23.849087  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:24.212404  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:24.294601  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:24.325096  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:24.349259  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:24.711719  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:24.793939  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:24.825453  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:24.850021  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:25.212901  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:25.293234  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:25.327049  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:25.349535  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:25.711699  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:25.793890  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:25.825025  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:25.848934  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:26.212110  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:26.294052  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:26.325665  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:26.350154  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:26.712403  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:26.793968  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:26.825535  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:26.850142  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:27.212324  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:27.293922  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:27.325477  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:27.350302  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:27.712480  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:27.794092  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:27.825728  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:27.849472  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:28.212965  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:28.293275  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:28.325331  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:28.349330  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:28.712079  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:28.792922  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:28.824789  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:28.848862  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:29.213562  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:29.297154  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:29.328186  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:29.351688  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:29.713294  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:29.794923  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:29.828444  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:29.855295  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:30.213379  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:30.294348  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:30.324903  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:30.349081  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:30.712125  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:30.794855  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:30.825599  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:30.849708  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:31.212315  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:31.293919  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:31.325497  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:31.350394  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:31.713341  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:31.793798  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:31.841633  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:31.859654  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:32.214117  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:32.293388  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:32.325515  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:32.349393  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:32.711747  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:32.794339  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:32.825122  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:32.872971  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:33.212219  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:33.293422  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:33.324931  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:33.350098  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:33.711682  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:33.793742  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:33.824508  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:33.849269  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:34.212111  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:34.293856  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:34.324598  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:34.349417  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:34.712049  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:34.793669  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:34.824877  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:34.849127  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:35.212041  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:35.294915  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:35.328186  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:35.356702  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:35.712430  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:35.793665  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:35.824270  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:35.849589  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:36.212394  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:36.293152  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:36.324845  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:36.348883  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:36.711510  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:36.794512  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:36.825925  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:36.849285  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:37.212142  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:37.292795  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:37.325046  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:37.349673  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:37.713474  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:37.794169  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:37.826485  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:37.849921  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:38.213810  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:38.295346  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:38.324941  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:38.349392  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:38.712305  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:38.801117  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:38.826102  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:38.849858  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:39.212143  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:39.298071  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:39.325613  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:39.351330  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:39.712541  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:39.793994  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:39.825242  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:39.849196  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:40.212245  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:40.293517  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:40.324472  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:40.349464  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:40.712201  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:40.793614  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:40.824821  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:40.848977  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:41.212285  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:41.293921  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:41.325178  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:41.349075  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:41.713645  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:41.793832  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:41.825182  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:41.849401  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:42.213216  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:42.293524  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:42.328387  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:42.349539  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:42.713254  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:42.793693  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:42.825328  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:42.849761  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:43.213307  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:43.293881  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:43.325590  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:43.349609  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:43.712638  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:43.793735  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:43.824903  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:43.848946  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:44.212479  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:44.297037  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:44.397996  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:44.398490  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:44.711955  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:44.793754  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:44.825151  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:44.849812  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:45.254158  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:45.299501  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:45.341664  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:45.349161  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:45.712756  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:45.794277  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:45.825398  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:45.849552  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:46.212790  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:46.294225  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:46.325431  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:46.349383  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:46.711568  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:46.793844  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:46.825079  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:46.849025  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:47.212621  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:47.294108  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:47.325564  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:47.349516  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:47.712718  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:47.794495  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:47.824698  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:47.848877  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:48.212242  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:48.293588  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:48.324799  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:48.348770  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:48.711925  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:48.793113  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:48.825551  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:48.849752  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:49.211948  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:49.293516  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:49.324662  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:49.349939  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:49.712367  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:49.794212  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:49.825538  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:49.849384  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:50.212410  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:50.293408  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:50.325488  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:50.349349  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:50.711653  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:50.793948  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:50.825219  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:50.849455  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:51.212748  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:51.293856  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:51.325377  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:51.354922  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:51.712659  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:51.813046  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:51.824948  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:51.849847  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:52.213031  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:52.313022  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:52.325053  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:52.348968  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:52.712768  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:52.794226  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:52.825526  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:52.849894  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:53.211701  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:53.293898  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:53.325531  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:53.349973  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:53.712637  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:53.812536  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:53.825519  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:53.849898  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:54.211860  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:54.293185  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:54.325105  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:54.349196  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:54.711967  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:54.793250  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:54.825446  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:54.849787  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:55.212312  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:55.293548  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:55.324634  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:55.348827  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:55.713339  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:55.793514  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:55.825738  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:55.849325  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:56.213083  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:56.293434  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:56.324567  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:56.348820  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:56.711393  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:56.793622  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:56.824757  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:56.848766  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:57.211996  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:57.292974  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:57.324736  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:57.348710  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:57.713215  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:57.797320  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:57.825107  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 20:39:57.849996  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:58.213587  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:58.293799  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:58.327215  487012 kapi.go:107] duration metric: took 1m12.505797288s to wait for kubernetes.io/minikube-addons=registry ...
	I1201 20:39:58.348709  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:58.712483  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:58.793735  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:58.848675  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:59.211792  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:59.293828  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:59.350162  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:39:59.711579  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:39:59.793973  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:39:59.850117  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:00.246747  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:00.325410  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:00.356437  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:00.715848  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:00.797718  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:00.849649  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:01.212625  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:01.295766  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:01.358171  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:01.713625  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:01.794160  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:01.850494  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:02.212034  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:02.294083  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:02.348909  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:02.713276  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:02.793632  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:02.851193  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:03.213344  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:03.294508  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:03.350030  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:03.721230  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:03.793974  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:03.849507  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:04.212835  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:04.293710  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:04.349285  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:04.713262  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:04.793419  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:04.849907  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:05.212597  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:05.315310  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:05.350248  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:05.714839  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:05.794246  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:05.850647  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:06.213370  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:06.314088  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:06.350656  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:06.712311  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:06.793727  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:06.849783  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:07.212555  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:07.293608  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:07.349152  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:07.712929  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:07.794356  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:07.849983  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:08.213561  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:08.293999  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:08.349519  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:08.712337  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:08.793845  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:08.849299  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:09.212589  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:09.302570  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:09.397000  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:09.714509  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:09.794440  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:09.849850  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 20:40:10.212958  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:10.293518  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:10.350197  487012 kapi.go:107] duration metric: took 1m21.004526803s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1201 20:40:10.353484  487012 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-947185 cluster.
	I1201 20:40:10.356695  487012 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1201 20:40:10.360040  487012 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1201 20:40:10.712360  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:10.823395  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:11.213496  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:11.313395  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:11.712630  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:11.793861  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:12.212636  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:12.294330  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:12.713017  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:12.794407  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:13.218485  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:13.293713  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:13.713430  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:13.793922  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:14.212507  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:14.294062  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:14.713322  487012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 20:40:14.794305  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:15.300407  487012 kapi.go:107] duration metric: took 1m29.092234874s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1201 20:40:15.307486  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:15.793366  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:16.294693  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:16.793502  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:17.293110  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:17.793926  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:18.293057  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:18.793887  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:19.293902  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:19.793954  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:20.293488  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:20.794012  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:21.293780  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:21.793416  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:22.292915  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:22.793816  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:23.293707  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:23.793780  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:24.294016  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:24.793985  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:25.293459  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:25.794242  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:26.294558  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:26.793895  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:27.294111  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:27.794182  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:28.293456  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:28.793566  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:29.293495  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:29.794276  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:30.293901  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:30.793029  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:31.294625  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:31.793459  487012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 20:40:32.294190  487012 kapi.go:107] duration metric: took 1m46.504286182s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1201 20:40:32.297187  487012 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, ingress-dns, registry-creds, cloud-spanner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1201 20:40:32.299879  487012 addons.go:530] duration metric: took 1m53.526352603s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner nvidia-device-plugin ingress-dns registry-creds cloud-spanner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1201 20:40:32.299934  487012 start.go:247] waiting for cluster config update ...
	I1201 20:40:32.299957  487012 start.go:256] writing updated cluster config ...
	I1201 20:40:32.300287  487012 ssh_runner.go:195] Run: rm -f paused
	I1201 20:40:32.306714  487012 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:40:32.310377  487012 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q75zt" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:32.315556  487012 pod_ready.go:94] pod "coredns-66bc5c9577-q75zt" is "Ready"
	I1201 20:40:32.315588  487012 pod_ready.go:86] duration metric: took 5.181812ms for pod "coredns-66bc5c9577-q75zt" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:32.317913  487012 pod_ready.go:83] waiting for pod "etcd-addons-947185" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:32.325018  487012 pod_ready.go:94] pod "etcd-addons-947185" is "Ready"
	I1201 20:40:32.325050  487012 pod_ready.go:86] duration metric: took 7.110036ms for pod "etcd-addons-947185" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:32.333683  487012 pod_ready.go:83] waiting for pod "kube-apiserver-addons-947185" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:32.339917  487012 pod_ready.go:94] pod "kube-apiserver-addons-947185" is "Ready"
	I1201 20:40:32.339946  487012 pod_ready.go:86] duration metric: took 6.235543ms for pod "kube-apiserver-addons-947185" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:32.342632  487012 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-947185" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:32.710944  487012 pod_ready.go:94] pod "kube-controller-manager-addons-947185" is "Ready"
	I1201 20:40:32.710971  487012 pod_ready.go:86] duration metric: took 368.313701ms for pod "kube-controller-manager-addons-947185" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:32.911879  487012 pod_ready.go:83] waiting for pod "kube-proxy-6l2m9" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:33.310422  487012 pod_ready.go:94] pod "kube-proxy-6l2m9" is "Ready"
	I1201 20:40:33.310452  487012 pod_ready.go:86] duration metric: took 398.547244ms for pod "kube-proxy-6l2m9" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:33.510929  487012 pod_ready.go:83] waiting for pod "kube-scheduler-addons-947185" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:33.911457  487012 pod_ready.go:94] pod "kube-scheduler-addons-947185" is "Ready"
	I1201 20:40:33.911486  487012 pod_ready.go:86] duration metric: took 400.481834ms for pod "kube-scheduler-addons-947185" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:40:33.911502  487012 pod_ready.go:40] duration metric: took 1.604749625s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:40:33.984828  487012 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1201 20:40:33.990027  487012 out.go:179] * Done! kubectl is now configured to use "addons-947185" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 01 20:41:16 addons-947185 crio[829]: time="2025-12-01T20:41:16.683538325Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=08e4b388-7f8b-45a7-924e-473291f7a882 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:41:16 addons-947185 crio[829]: time="2025-12-01T20:41:16.686680005Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=60dcf8ce-3dca-4e17-aa7a-56e0cb413375 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:41:16 addons-947185 crio[829]: time="2025-12-01T20:41:16.692946012Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-8ec1522d-0dd4-4a4c-a6f7-cb8725038640/helper-pod" id=d7b0b36d-3255-4a3d-9082-30d4a96418f3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:41:16 addons-947185 crio[829]: time="2025-12-01T20:41:16.693243795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:41:16 addons-947185 crio[829]: time="2025-12-01T20:41:16.700661221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:41:16 addons-947185 crio[829]: time="2025-12-01T20:41:16.701196496Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:41:16 addons-947185 crio[829]: time="2025-12-01T20:41:16.719623152Z" level=info msg="Created container 75e18ab65c2bc9d9f001ea4b204e7ec046bae986c0017f518e993b29d9eda2eb: local-path-storage/helper-pod-delete-pvc-8ec1522d-0dd4-4a4c-a6f7-cb8725038640/helper-pod" id=d7b0b36d-3255-4a3d-9082-30d4a96418f3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:41:16 addons-947185 crio[829]: time="2025-12-01T20:41:16.720644826Z" level=info msg="Starting container: 75e18ab65c2bc9d9f001ea4b204e7ec046bae986c0017f518e993b29d9eda2eb" id=0af71f87-eaa8-4168-943d-d5c5d1a4e463 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:41:16 addons-947185 crio[829]: time="2025-12-01T20:41:16.722865162Z" level=info msg="Started container" PID=5679 containerID=75e18ab65c2bc9d9f001ea4b204e7ec046bae986c0017f518e993b29d9eda2eb description=local-path-storage/helper-pod-delete-pvc-8ec1522d-0dd4-4a4c-a6f7-cb8725038640/helper-pod id=0af71f87-eaa8-4168-943d-d5c5d1a4e463 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b1a6c828d8fca76c6bcf05aa4796eb503b6b666f6862f6a8cbede0b355a7fc1
	Dec 01 20:41:18 addons-947185 crio[829]: time="2025-12-01T20:41:18.46727206Z" level=info msg="Stopping pod sandbox: 1b1a6c828d8fca76c6bcf05aa4796eb503b6b666f6862f6a8cbede0b355a7fc1" id=d9bc5a22-ee19-4139-952f-8f9ba268054c name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 01 20:41:18 addons-947185 crio[829]: time="2025-12-01T20:41:18.467577687Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-8ec1522d-0dd4-4a4c-a6f7-cb8725038640 Namespace:local-path-storage ID:1b1a6c828d8fca76c6bcf05aa4796eb503b6b666f6862f6a8cbede0b355a7fc1 UID:ce02d578-f38e-4981-9363-bfa27ee608cc NetNS:/var/run/netns/4cfe7231-0798-4b10-a358-c020183f893c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d9b0}] Aliases:map[]}"
	Dec 01 20:41:18 addons-947185 crio[829]: time="2025-12-01T20:41:18.467730939Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-8ec1522d-0dd4-4a4c-a6f7-cb8725038640 from CNI network \"kindnet\" (type=ptp)"
	Dec 01 20:41:18 addons-947185 crio[829]: time="2025-12-01T20:41:18.494686161Z" level=info msg="Stopped pod sandbox: 1b1a6c828d8fca76c6bcf05aa4796eb503b6b666f6862f6a8cbede0b355a7fc1" id=d9bc5a22-ee19-4139-952f-8f9ba268054c name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 01 20:41:19 addons-947185 crio[829]: time="2025-12-01T20:41:19.153250552Z" level=info msg="Stopping container: 840546f14733ff69a8380af6fe67c6da8e1452ea480e6c517077bf55e8d89000 (timeout: 30s)" id=8761d051-2052-4a37-8bf3-368979192847 name=/runtime.v1.RuntimeService/StopContainer
	Dec 01 20:41:19 addons-947185 crio[829]: time="2025-12-01T20:41:19.265093192Z" level=info msg="Stopped container 840546f14733ff69a8380af6fe67c6da8e1452ea480e6c517077bf55e8d89000: default/task-pv-pod-restore/task-pv-container" id=8761d051-2052-4a37-8bf3-368979192847 name=/runtime.v1.RuntimeService/StopContainer
	Dec 01 20:41:19 addons-947185 crio[829]: time="2025-12-01T20:41:19.26569976Z" level=info msg="Stopping pod sandbox: b46a74051b4f7aa4e19e508e0eb71bb3826db198dccbd84c19d89661fe614d47" id=d8c7f7e5-389f-4246-aee5-75d5123973a3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 01 20:41:19 addons-947185 crio[829]: time="2025-12-01T20:41:19.265979615Z" level=info msg="Got pod network &{Name:task-pv-pod-restore Namespace:default ID:b46a74051b4f7aa4e19e508e0eb71bb3826db198dccbd84c19d89661fe614d47 UID:8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b NetNS:/var/run/netns/defedc56-d018-4073-b33c-f244c09b2f85 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400152b450}] Aliases:map[]}"
	Dec 01 20:41:19 addons-947185 crio[829]: time="2025-12-01T20:41:19.266164851Z" level=info msg="Deleting pod default_task-pv-pod-restore from CNI network \"kindnet\" (type=ptp)"
	Dec 01 20:41:19 addons-947185 crio[829]: time="2025-12-01T20:41:19.297629347Z" level=info msg="Stopped pod sandbox: b46a74051b4f7aa4e19e508e0eb71bb3826db198dccbd84c19d89661fe614d47" id=d8c7f7e5-389f-4246-aee5-75d5123973a3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 01 20:41:19 addons-947185 crio[829]: time="2025-12-01T20:41:19.476252997Z" level=info msg="Removing container: 75e18ab65c2bc9d9f001ea4b204e7ec046bae986c0017f518e993b29d9eda2eb" id=2582a917-c4eb-4c49-b6c9-120421311d68 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:41:19 addons-947185 crio[829]: time="2025-12-01T20:41:19.4799168Z" level=info msg="Error loading conmon cgroup of container 75e18ab65c2bc9d9f001ea4b204e7ec046bae986c0017f518e993b29d9eda2eb: cgroup deleted" id=2582a917-c4eb-4c49-b6c9-120421311d68 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:41:19 addons-947185 crio[829]: time="2025-12-01T20:41:19.490530912Z" level=info msg="Removed container 75e18ab65c2bc9d9f001ea4b204e7ec046bae986c0017f518e993b29d9eda2eb: local-path-storage/helper-pod-delete-pvc-8ec1522d-0dd4-4a4c-a6f7-cb8725038640/helper-pod" id=2582a917-c4eb-4c49-b6c9-120421311d68 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:41:19 addons-947185 crio[829]: time="2025-12-01T20:41:19.52529268Z" level=info msg="Removing container: 840546f14733ff69a8380af6fe67c6da8e1452ea480e6c517077bf55e8d89000" id=8f197802-d079-4a33-b5c5-69409d4c806e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:41:19 addons-947185 crio[829]: time="2025-12-01T20:41:19.528309046Z" level=info msg="Error loading conmon cgroup of container 840546f14733ff69a8380af6fe67c6da8e1452ea480e6c517077bf55e8d89000: cgroup deleted" id=8f197802-d079-4a33-b5c5-69409d4c806e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:41:19 addons-947185 crio[829]: time="2025-12-01T20:41:19.533212142Z" level=info msg="Removed container 840546f14733ff69a8380af6fe67c6da8e1452ea480e6c517077bf55e8d89000: default/task-pv-pod-restore/task-pv-container" id=8f197802-d079-4a33-b5c5-69409d4c806e name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	ff238447bcbd3       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            8 seconds ago        Exited              busybox                                  0                   224cfda986f7a       test-local-path                                              default
	03dbe7dd648c5       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            12 seconds ago       Exited              helper-pod                               0                   acdd03d1ad888       helper-pod-create-pvc-8ec1522d-0dd4-4a4c-a6f7-cb8725038640   local-path-storage
	072308d36ca58       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          22 seconds ago       Exited              registry-test                            0                   b0afb16f8b7f9       registry-test                                                default
	80f560bb03773       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          45 seconds ago       Running             busybox                                  0                   5a629a96091e7       busybox                                                      default
	ef1b80bc96780       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             50 seconds ago       Running             controller                               0                   ca537e4f03d39       ingress-nginx-controller-6c8bf45fb-tsqsw                     ingress-nginx
	1a93315e27a95       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             About a minute ago   Exited              patch                                    3                   2e47de487632d       ingress-nginx-admission-patch-9mt5s                          ingress-nginx
	7b62bd9d48709       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          About a minute ago   Running             csi-snapshotter                          0                   d79d06056f1a2       csi-hostpathplugin-z8frr                                     kube-system
	29c40b113be21       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          About a minute ago   Running             csi-provisioner                          0                   d79d06056f1a2       csi-hostpathplugin-z8frr                                     kube-system
	e850c7755eb34       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            About a minute ago   Running             liveness-probe                           0                   d79d06056f1a2       csi-hostpathplugin-z8frr                                     kube-system
	efdf78311fe62       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           About a minute ago   Running             hostpath                                 0                   d79d06056f1a2       csi-hostpathplugin-z8frr                                     kube-system
	2f2f59c27da37       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 About a minute ago   Running             gcp-auth                                 0                   4b4bce51fd535       gcp-auth-78565c9fb4-8vxpt                                    gcp-auth
	232ae6c256a29       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                About a minute ago   Running             node-driver-registrar                    0                   d79d06056f1a2       csi-hostpathplugin-z8frr                                     kube-system
	d7875f3fcd966       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            About a minute ago   Running             gadget                                   0                   34ab0fff3a418       gadget-ph2zs                                                 gadget
	7a49eee06f360       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   4771b76c817dd       csi-hostpath-attacher-0                                      kube-system
	2e43602ecbbd5       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   8f94bb99d6a7d       local-path-provisioner-648f6765c9-zt7fb                      local-path-storage
	6a99991f8f5f7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago   Exited              create                                   0                   511f190d7ccc8       ingress-nginx-admission-create-pqg7d                         ingress-nginx
	dea3b2ad8e17b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   0813e4130fba3       registry-proxy-scbhm                                         kube-system
	295353c277ab2       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   b6854d2b98d35       csi-hostpath-resizer-0                                       kube-system
	b322f4a7417f9       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   639e05362b663       snapshot-controller-7d9fbc56b8-h8r4s                         kube-system
	dfa409f637400       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   d79d06056f1a2       csi-hostpathplugin-z8frr                                     kube-system
	ed486d82e1fa5       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   62abca8f433d7       yakd-dashboard-5ff678cb9-vsq2m                               yakd-dashboard
	7e60a35a8eba6       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   47991145d8e11       snapshot-controller-7d9fbc56b8-r8wng                         kube-system
	3f46bcefd8d83       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   12ac5a7a095bc       nvidia-device-plugin-daemonset-mm775                         kube-system
	1f7b4e9296524       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               About a minute ago   Running             cloud-spanner-emulator                   0                   fda14bda2fab1       cloud-spanner-emulator-5bdddb765-t5czj                       default
	58cd25bffc816       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   54033b7e1fe3b       registry-6b586f9694-m876b                                    kube-system
	361dc81943838       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   14c5179911c2d       metrics-server-85b7d694d7-wwwt5                              kube-system
	9f83ec5f5e551       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   19f4a67b0d738       kube-ingress-dns-minikube                                    kube-system
	2355b41e2da84       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             2 minutes ago        Running             coredns                                  0                   2dd71ab78b4d8       coredns-66bc5c9577-q75zt                                     kube-system
	1837dcaf5caf8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             2 minutes ago        Running             storage-provisioner                      0                   a4b7aa48089db       storage-provisioner                                          kube-system
	95ac3b0ee00d6       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             2 minutes ago        Running             kube-proxy                               0                   49dc58cf3ba51       kube-proxy-6l2m9                                             kube-system
	53fd34a71ad26       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   09f87990807d8       kindnet-5m5nn                                                kube-system
	913315b106bf8       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             2 minutes ago        Running             kube-scheduler                           0                   625ee107f8cbb       kube-scheduler-addons-947185                                 kube-system
	d708a60b3df7c       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             2 minutes ago        Running             kube-apiserver                           0                   ac9bd49cc73a8       kube-apiserver-addons-947185                                 kube-system
	2608ffb63d779       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             2 minutes ago        Running             kube-controller-manager                  0                   26573ac936339       kube-controller-manager-addons-947185                        kube-system
	969d358cb0a5c       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             2 minutes ago        Running             etcd                                     0                   3da43ff1b8ccc       etcd-addons-947185                                           kube-system
	
	
	==> coredns [2355b41e2da84e3db29da2f6728212647e392fda597ebd954072085ccc5b4440] <==
	[INFO] 10.244.0.8:47227 - 20173 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.003448547s
	[INFO] 10.244.0.8:47227 - 59811 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000125962s
	[INFO] 10.244.0.8:47227 - 26837 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000091494s
	[INFO] 10.244.0.8:46888 - 44968 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000181584s
	[INFO] 10.244.0.8:46888 - 45198 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000251375s
	[INFO] 10.244.0.8:58537 - 13386 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000135595s
	[INFO] 10.244.0.8:58537 - 13181 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128883s
	[INFO] 10.244.0.8:54084 - 17590 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096474s
	[INFO] 10.244.0.8:54084 - 17401 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068995s
	[INFO] 10.244.0.8:49262 - 16235 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001267979s
	[INFO] 10.244.0.8:49262 - 16440 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001488028s
	[INFO] 10.244.0.8:60073 - 29478 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000200094s
	[INFO] 10.244.0.8:60073 - 29330 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000146763s
	[INFO] 10.244.0.20:37945 - 17659 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000175299s
	[INFO] 10.244.0.20:48842 - 21247 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000213149s
	[INFO] 10.244.0.20:36111 - 21050 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000161761s
	[INFO] 10.244.0.20:39336 - 33750 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000069619s
	[INFO] 10.244.0.20:53133 - 8518 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000144514s
	[INFO] 10.244.0.20:33714 - 49444 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077102s
	[INFO] 10.244.0.20:56722 - 46483 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002660862s
	[INFO] 10.244.0.20:50843 - 41739 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001726308s
	[INFO] 10.244.0.20:34480 - 40894 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000717147s
	[INFO] 10.244.0.20:35932 - 33776 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001442391s
	[INFO] 10.244.0.24:59064 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000218646s
	[INFO] 10.244.0.24:48716 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000481032s
	
	
	==> describe nodes <==
	Name:               addons-947185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-947185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=addons-947185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_38_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-947185
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-947185"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:38:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-947185
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:41:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:41:06 +0000   Mon, 01 Dec 2025 20:38:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:41:06 +0000   Mon, 01 Dec 2025 20:38:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:41:06 +0000   Mon, 01 Dec 2025 20:38:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 20:41:06 +0000   Mon, 01 Dec 2025 20:39:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-947185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                904801a4-17c3-4e2b-995e-dac559f4bfd9
	  Boot ID:                    06dea43b-2aa1-4726-8bb8-0a198189349a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  default                     cloud-spanner-emulator-5bdddb765-t5czj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  gadget                      gadget-ph2zs                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  gcp-auth                    gcp-auth-78565c9fb4-8vxpt                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-tsqsw    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m37s
	  kube-system                 coredns-66bc5c9577-q75zt                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m44s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 csi-hostpathplugin-z8frr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 etcd-addons-947185                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m49s
	  kube-system                 kindnet-5m5nn                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m44s
	  kube-system                 kube-apiserver-addons-947185                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 kube-controller-manager-addons-947185       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kube-proxy-6l2m9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 kube-scheduler-addons-947185                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 metrics-server-85b7d694d7-wwwt5             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m38s
	  kube-system                 nvidia-device-plugin-daemonset-mm775        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 registry-6b586f9694-m876b                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 registry-creds-764b6fb674-qc52j             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 registry-proxy-scbhm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 snapshot-controller-7d9fbc56b8-h8r4s        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 snapshot-controller-7d9fbc56b8-r8wng        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  local-path-storage          local-path-provisioner-648f6765c9-zt7fb     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-vsq2m              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m41s                  kube-proxy       
	  Warning  CgroupV1                 2m56s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m56s (x8 over 2m56s)  kubelet          Node addons-947185 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m56s (x8 over 2m56s)  kubelet          Node addons-947185 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m56s (x8 over 2m56s)  kubelet          Node addons-947185 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m49s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m49s                  kubelet          Node addons-947185 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s                  kubelet          Node addons-947185 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s                  kubelet          Node addons-947185 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m45s                  node-controller  Node addons-947185 event: Registered Node addons-947185 in Controller
	  Normal   NodeReady                2m2s                   kubelet          Node addons-947185 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 1 19:31] hrtimer: interrupt took 3224715 ns
	[Dec 1 20:00] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:16] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 1 20:22] systemd-journald[231]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 1 20:37] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:38] overlayfs: idmapped layers are currently not supported
	[  +0.076902] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [969d358cb0a5cd5ce66e56d51a58b46aef284ea9dc6eb5b45fbef1ed0b16310d] <==
	{"level":"warn","ts":"2025-12-01T20:38:29.468318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.487970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.504352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.529163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.543969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.556086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.579628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.593713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.612364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.631941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.648646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.663992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.687285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.712266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.738167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.761286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.782982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.799553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:29.899387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:46.350072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:38:46.355371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:39:07.723648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:39:07.738290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:39:07.771314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:39:07.786542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53026","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [2f2f59c27da378ee11ed11fde00c3a0effbb0dd7a3c3b6badb5ec864517fb892] <==
	2025/12/01 20:40:09 GCP Auth Webhook started!
	2025/12/01 20:40:34 Ready to marshal response ...
	2025/12/01 20:40:34 Ready to write response ...
	2025/12/01 20:40:34 Ready to marshal response ...
	2025/12/01 20:40:34 Ready to write response ...
	2025/12/01 20:40:34 Ready to marshal response ...
	2025/12/01 20:40:34 Ready to write response ...
	2025/12/01 20:40:52 Ready to marshal response ...
	2025/12/01 20:40:52 Ready to write response ...
	2025/12/01 20:40:54 Ready to marshal response ...
	2025/12/01 20:40:54 Ready to write response ...
	2025/12/01 20:41:08 Ready to marshal response ...
	2025/12/01 20:41:08 Ready to write response ...
	2025/12/01 20:41:08 Ready to marshal response ...
	2025/12/01 20:41:08 Ready to write response ...
	2025/12/01 20:41:12 Ready to marshal response ...
	2025/12/01 20:41:12 Ready to write response ...
	2025/12/01 20:41:16 Ready to marshal response ...
	2025/12/01 20:41:16 Ready to write response ...
	
	
	==> kernel <==
	 20:41:22 up  2:23,  0 user,  load average: 1.53, 1.68, 2.21
	Linux addons-947185 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [53fd34a71ad2647a883f70ec1aceb708dc5a011083d943427fe324abe79d43ac] <==
	I1201 20:39:19.723390       1 main.go:301] handling current node
	I1201 20:39:29.727630       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:39:29.727662       1 main.go:301] handling current node
	I1201 20:39:39.720602       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:39:39.720632       1 main.go:301] handling current node
	I1201 20:39:49.723304       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:39:49.723338       1 main.go:301] handling current node
	I1201 20:39:59.721099       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:39:59.721179       1 main.go:301] handling current node
	I1201 20:40:09.725047       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:40:09.725087       1 main.go:301] handling current node
	I1201 20:40:19.721313       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:40:19.721366       1 main.go:301] handling current node
	I1201 20:40:29.723236       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:40:29.723272       1 main.go:301] handling current node
	I1201 20:40:39.720584       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:40:39.720665       1 main.go:301] handling current node
	I1201 20:40:49.720688       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:40:49.720801       1 main.go:301] handling current node
	I1201 20:40:59.720569       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:40:59.720602       1 main.go:301] handling current node
	I1201 20:41:09.721561       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:41:09.721592       1 main.go:301] handling current node
	I1201 20:41:19.721236       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:41:19.721277       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d708a60b3df7ced4763b714c1f1a36c6df9483c81552da97ea0386f1f248b3ef] <==
	I1201 20:38:49.187416       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.105.190.154"}
	W1201 20:39:07.723388       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1201 20:39:07.737910       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1201 20:39:07.771002       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1201 20:39:07.786531       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1201 20:39:20.277252       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.190.154:443: connect: connection refused
	E1201 20:39:20.277299       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.190.154:443: connect: connection refused" logger="UnhandledError"
	W1201 20:39:20.277938       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.190.154:443: connect: connection refused
	E1201 20:39:20.277975       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.190.154:443: connect: connection refused" logger="UnhandledError"
	W1201 20:39:20.381602       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.190.154:443: connect: connection refused
	E1201 20:39:20.381643       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.190.154:443: connect: connection refused" logger="UnhandledError"
	E1201 20:39:32.916197       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.104:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.104:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.104:443: connect: connection refused" logger="UnhandledError"
	W1201 20:39:32.916754       1 handler_proxy.go:99] no RequestInfo found in the context
	E1201 20:39:32.916914       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1201 20:39:32.918819       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.104:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.104:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.104:443: connect: connection refused" logger="UnhandledError"
	E1201 20:39:32.952286       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.104:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.104:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.104:443: connect: connection refused" logger="UnhandledError"
	E1201 20:39:32.983500       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.104:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.104:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.104:443: connect: connection refused" logger="UnhandledError"
	I1201 20:39:33.124405       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1201 20:40:43.058412       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39032: use of closed network connection
	E1201 20:40:43.309193       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39050: use of closed network connection
	E1201 20:40:43.447773       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39076: use of closed network connection
	I1201 20:41:03.576658       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [2608ffb63d77980a71676c95316c60a1bf74002a61cf3024ec1b056d5b0cf0be] <==
	I1201 20:38:37.751999       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1201 20:38:37.758790       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1201 20:38:37.758895       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1201 20:38:37.759865       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1201 20:38:37.759887       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1201 20:38:37.759897       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1201 20:38:37.759908       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1201 20:38:37.760589       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1201 20:38:37.760616       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1201 20:38:37.760628       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1201 20:38:37.760635       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1201 20:38:37.773384       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1201 20:38:37.775560       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-947185" podCIDRs=["10.244.0.0/24"]
	I1201 20:38:37.798013       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 20:38:37.798060       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1201 20:38:37.798069       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1201 20:38:44.420067       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1201 20:39:07.716479       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1201 20:39:07.716643       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1201 20:39:07.716701       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1201 20:39:07.760058       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1201 20:39:07.764406       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1201 20:39:07.817688       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 20:39:07.864865       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 20:39:22.759013       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [95ac3b0ee00d6ddb757ec6c4e57282c44007d2ea906b924c19d96021bc597dd9] <==
	I1201 20:38:40.841865       1 server_linux.go:53] "Using iptables proxy"
	I1201 20:38:40.924826       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 20:38:41.025848       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 20:38:41.025888       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1201 20:38:41.025971       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 20:38:41.069191       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:38:41.069249       1 server_linux.go:132] "Using iptables Proxier"
	I1201 20:38:41.076560       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 20:38:41.077083       1 server.go:527] "Version info" version="v1.34.2"
	I1201 20:38:41.077100       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:38:41.084882       1 config.go:200] "Starting service config controller"
	I1201 20:38:41.084902       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 20:38:41.084929       1 config.go:106] "Starting endpoint slice config controller"
	I1201 20:38:41.084934       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 20:38:41.084947       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 20:38:41.084951       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 20:38:41.085599       1 config.go:309] "Starting node config controller"
	I1201 20:38:41.085607       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 20:38:41.085613       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 20:38:41.185484       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 20:38:41.185522       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 20:38:41.185587       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [913315b106bf848f2bc78aeee2dff59fb0d7a2768c8a5dc7d27460b0037c689d] <==
	E1201 20:38:30.760170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1201 20:38:30.760309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1201 20:38:30.760426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 20:38:30.760550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1201 20:38:30.760692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1201 20:38:30.760809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1201 20:38:30.761015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1201 20:38:30.761159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 20:38:30.761533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1201 20:38:30.761616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1201 20:38:30.761639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1201 20:38:30.761687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1201 20:38:30.761707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1201 20:38:31.594141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1201 20:38:31.599479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1201 20:38:31.607464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1201 20:38:31.639108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 20:38:31.699851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1201 20:38:31.773358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1201 20:38:31.837362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1201 20:38:31.909235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1201 20:38:31.933096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1201 20:38:32.022716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 20:38:32.265635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1201 20:38:35.143597       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 01 20:41:18 addons-947185 kubelet[1281]: I1201 20:41:18.595777    1281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce02d578-f38e-4981-9363-bfa27ee608cc-data" (OuterVolumeSpecName: "data") pod "ce02d578-f38e-4981-9363-bfa27ee608cc" (UID: "ce02d578-f38e-4981-9363-bfa27ee608cc"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 01 20:41:18 addons-947185 kubelet[1281]: I1201 20:41:18.596042    1281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce02d578-f38e-4981-9363-bfa27ee608cc-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "ce02d578-f38e-4981-9363-bfa27ee608cc" (UID: "ce02d578-f38e-4981-9363-bfa27ee608cc"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 01 20:41:18 addons-947185 kubelet[1281]: I1201 20:41:18.602056    1281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce02d578-f38e-4981-9363-bfa27ee608cc-kube-api-access-jtcv5" (OuterVolumeSpecName: "kube-api-access-jtcv5") pod "ce02d578-f38e-4981-9363-bfa27ee608cc" (UID: "ce02d578-f38e-4981-9363-bfa27ee608cc"). InnerVolumeSpecName "kube-api-access-jtcv5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 01 20:41:18 addons-947185 kubelet[1281]: I1201 20:41:18.696066    1281 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/ce02d578-f38e-4981-9363-bfa27ee608cc-script\") on node \"addons-947185\" DevicePath \"\""
	Dec 01 20:41:18 addons-947185 kubelet[1281]: I1201 20:41:18.696120    1281 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/ce02d578-f38e-4981-9363-bfa27ee608cc-data\") on node \"addons-947185\" DevicePath \"\""
	Dec 01 20:41:18 addons-947185 kubelet[1281]: I1201 20:41:18.696133    1281 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jtcv5\" (UniqueName: \"kubernetes.io/projected/ce02d578-f38e-4981-9363-bfa27ee608cc-kube-api-access-jtcv5\") on node \"addons-947185\" DevicePath \"\""
	Dec 01 20:41:18 addons-947185 kubelet[1281]: I1201 20:41:18.696145    1281 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ce02d578-f38e-4981-9363-bfa27ee608cc-gcp-creds\") on node \"addons-947185\" DevicePath \"\""
	Dec 01 20:41:19 addons-947185 kubelet[1281]: I1201 20:41:19.402355    1281 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^128e22da-cef6-11f0-97b6-86fc691d74f7\") pod \"8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b\" (UID: \"8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b\") "
	Dec 01 20:41:19 addons-947185 kubelet[1281]: I1201 20:41:19.402588    1281 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dv8h\" (UniqueName: \"kubernetes.io/projected/8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b-kube-api-access-2dv8h\") pod \"8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b\" (UID: \"8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b\") "
	Dec 01 20:41:19 addons-947185 kubelet[1281]: I1201 20:41:19.402692    1281 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b-gcp-creds\") pod \"8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b\" (UID: \"8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b\") "
	Dec 01 20:41:19 addons-947185 kubelet[1281]: I1201 20:41:19.402987    1281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b" (UID: "8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 01 20:41:19 addons-947185 kubelet[1281]: I1201 20:41:19.406455    1281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^128e22da-cef6-11f0-97b6-86fc691d74f7" (OuterVolumeSpecName: "task-pv-storage") pod "8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b" (UID: "8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b"). InnerVolumeSpecName "pvc-eb69ae76-a90d-438d-b15a-592334b5e3d1". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 01 20:41:19 addons-947185 kubelet[1281]: I1201 20:41:19.407723    1281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b-kube-api-access-2dv8h" (OuterVolumeSpecName: "kube-api-access-2dv8h") pod "8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b" (UID: "8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b"). InnerVolumeSpecName "kube-api-access-2dv8h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 01 20:41:19 addons-947185 kubelet[1281]: I1201 20:41:19.429341    1281 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce02d578-f38e-4981-9363-bfa27ee608cc" path="/var/lib/kubelet/pods/ce02d578-f38e-4981-9363-bfa27ee608cc/volumes"
	Dec 01 20:41:19 addons-947185 kubelet[1281]: I1201 20:41:19.474085    1281 scope.go:117] "RemoveContainer" containerID="75e18ab65c2bc9d9f001ea4b204e7ec046bae986c0017f518e993b29d9eda2eb"
	Dec 01 20:41:19 addons-947185 kubelet[1281]: I1201 20:41:19.491827    1281 scope.go:117] "RemoveContainer" containerID="840546f14733ff69a8380af6fe67c6da8e1452ea480e6c517077bf55e8d89000"
	Dec 01 20:41:19 addons-947185 kubelet[1281]: I1201 20:41:19.503530    1281 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b-gcp-creds\") on node \"addons-947185\" DevicePath \"\""
	Dec 01 20:41:19 addons-947185 kubelet[1281]: I1201 20:41:19.503588    1281 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-eb69ae76-a90d-438d-b15a-592334b5e3d1\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^128e22da-cef6-11f0-97b6-86fc691d74f7\") on node \"addons-947185\" "
	Dec 01 20:41:19 addons-947185 kubelet[1281]: I1201 20:41:19.503603    1281 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2dv8h\" (UniqueName: \"kubernetes.io/projected/8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b-kube-api-access-2dv8h\") on node \"addons-947185\" DevicePath \"\""
	Dec 01 20:41:19 addons-947185 kubelet[1281]: I1201 20:41:19.519972    1281 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-eb69ae76-a90d-438d-b15a-592334b5e3d1" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^128e22da-cef6-11f0-97b6-86fc691d74f7") on node "addons-947185"
	Dec 01 20:41:19 addons-947185 kubelet[1281]: I1201 20:41:19.533678    1281 scope.go:117] "RemoveContainer" containerID="840546f14733ff69a8380af6fe67c6da8e1452ea480e6c517077bf55e8d89000"
	Dec 01 20:41:19 addons-947185 kubelet[1281]: E1201 20:41:19.534114    1281 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"840546f14733ff69a8380af6fe67c6da8e1452ea480e6c517077bf55e8d89000\": container with ID starting with 840546f14733ff69a8380af6fe67c6da8e1452ea480e6c517077bf55e8d89000 not found: ID does not exist" containerID="840546f14733ff69a8380af6fe67c6da8e1452ea480e6c517077bf55e8d89000"
	Dec 01 20:41:19 addons-947185 kubelet[1281]: I1201 20:41:19.534144    1281 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"840546f14733ff69a8380af6fe67c6da8e1452ea480e6c517077bf55e8d89000"} err="failed to get container status \"840546f14733ff69a8380af6fe67c6da8e1452ea480e6c517077bf55e8d89000\": rpc error: code = NotFound desc = could not find container \"840546f14733ff69a8380af6fe67c6da8e1452ea480e6c517077bf55e8d89000\": container with ID starting with 840546f14733ff69a8380af6fe67c6da8e1452ea480e6c517077bf55e8d89000 not found: ID does not exist"
	Dec 01 20:41:19 addons-947185 kubelet[1281]: I1201 20:41:19.604680    1281 reconciler_common.go:299] "Volume detached for volume \"pvc-eb69ae76-a90d-438d-b15a-592334b5e3d1\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^128e22da-cef6-11f0-97b6-86fc691d74f7\") on node \"addons-947185\" DevicePath \"\""
	Dec 01 20:41:21 addons-947185 kubelet[1281]: I1201 20:41:21.429345    1281 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b" path="/var/lib/kubelet/pods/8fec89cc-87b3-431f-a2ea-cb4c1ed5e47b/volumes"
	
	
	==> storage-provisioner [1837dcaf5caf8fbebc71252339be8e05fe293e1db73f148ce648a43a877e6c06] <==
	W1201 20:40:57.853113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:40:59.856977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:40:59.864464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:01.869234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:01.875110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:03.878949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:03.883669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:05.887115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:05.896172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:07.899724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:07.907422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:09.911982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:09.918886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:11.922654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:11.928748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:13.933333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:13.939100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:15.947363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:15.955217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:17.958083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:17.962790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:19.966488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:19.974048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:21.979790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:41:21.996960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-947185 -n addons-947185
helpers_test.go:269: (dbg) Run:  kubectl --context addons-947185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-pqg7d ingress-nginx-admission-patch-9mt5s registry-creds-764b6fb674-qc52j
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-947185 describe pod ingress-nginx-admission-create-pqg7d ingress-nginx-admission-patch-9mt5s registry-creds-764b6fb674-qc52j
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-947185 describe pod ingress-nginx-admission-create-pqg7d ingress-nginx-admission-patch-9mt5s registry-creds-764b6fb674-qc52j: exit status 1 (97.743108ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-pqg7d" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9mt5s" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-qc52j" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-947185 describe pod ingress-nginx-admission-create-pqg7d ingress-nginx-admission-patch-9mt5s registry-creds-764b6fb674-qc52j: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-947185 addons disable headlamp --alsologtostderr -v=1: exit status 11 (281.896474ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 20:41:23.670271  494791 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:41:23.671214  494791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:23.671234  494791 out.go:374] Setting ErrFile to fd 2...
	I1201 20:41:23.671242  494791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:23.671544  494791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:41:23.671856  494791 mustload.go:66] Loading cluster: addons-947185
	I1201 20:41:23.672246  494791 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:23.672268  494791 addons.go:622] checking whether the cluster is paused
	I1201 20:41:23.672376  494791 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:23.672393  494791 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:41:23.672934  494791 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:41:23.692007  494791 ssh_runner.go:195] Run: systemctl --version
	I1201 20:41:23.692070  494791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:41:23.711990  494791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:41:23.822118  494791 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:41:23.822221  494791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:41:23.860487  494791 cri.go:89] found id: "7b62bd9d487096c579f1550339b68679be8332190765f60e06cd4937777a9df1"
	I1201 20:41:23.860509  494791 cri.go:89] found id: "29c40b113be21f8fe1bbe615bf111319d1777cc9025daf564682c1eefb3b445b"
	I1201 20:41:23.860513  494791 cri.go:89] found id: "e850c7755eb3428fe6fa7ba19c93fb7bc371967c19c4b82128cc91cb8053b5f3"
	I1201 20:41:23.860517  494791 cri.go:89] found id: "efdf78311fe62cfc0a35e43f8eeb729633a306dc0ea4ee568313518540399159"
	I1201 20:41:23.860520  494791 cri.go:89] found id: "232ae6c256a292c984f8cb48df8eceb3ee1873530d9e6f34c1a187c754908802"
	I1201 20:41:23.860524  494791 cri.go:89] found id: "7a49eee06f360dfeaf94beb2bbb4cdce7e5500414fdd2cee0ce12df2e5eb7f32"
	I1201 20:41:23.860527  494791 cri.go:89] found id: "dea3b2ad8e17be71f39c61f41026c7cb1b4623b5b887bff64c5b0486499999a1"
	I1201 20:41:23.860529  494791 cri.go:89] found id: "295353c277ab2fdf17a5bdf35885cd4aaf50e1c7a0310e8e9e47c938ee142acc"
	I1201 20:41:23.860532  494791 cri.go:89] found id: "b322f4a7417f96b30191db63c4f54268c9461124eb22cd29fa7aeee5aeec2c92"
	I1201 20:41:23.860538  494791 cri.go:89] found id: "dfa409f637400d697ead65609bdc54109d491cdce86d60e6c023d32ba59f02ae"
	I1201 20:41:23.860541  494791 cri.go:89] found id: "7e60a35a8eba6d85c1e35fe7520e0df66d2be5e95549b379c81bee82272e106c"
	I1201 20:41:23.860544  494791 cri.go:89] found id: "3f46bcefd8d83c33619ab577977393c12c9eb43945e7d3125f4e246f5b0455d5"
	I1201 20:41:23.860547  494791 cri.go:89] found id: "58cd25bffc816d350673df609f72e7f334b3ed0cfccb32cf1b2638a79781b10e"
	I1201 20:41:23.860551  494791 cri.go:89] found id: "361dc8194383806d837ada675e727c49f53ac9cfd9b315a3224ea1ce0ebfcc3b"
	I1201 20:41:23.860554  494791 cri.go:89] found id: "9f83ec5f5e5514d5a500d7b543761751c20c52d5b0c4da0872a31d0231b628fd"
	I1201 20:41:23.860559  494791 cri.go:89] found id: "2355b41e2da84e3db29da2f6728212647e392fda597ebd954072085ccc5b4440"
	I1201 20:41:23.860562  494791 cri.go:89] found id: "1837dcaf5caf8fbebc71252339be8e05fe293e1db73f148ce648a43a877e6c06"
	I1201 20:41:23.860566  494791 cri.go:89] found id: "95ac3b0ee00d6ddb757ec6c4e57282c44007d2ea906b924c19d96021bc597dd9"
	I1201 20:41:23.860569  494791 cri.go:89] found id: "53fd34a71ad2647a883f70ec1aceb708dc5a011083d943427fe324abe79d43ac"
	I1201 20:41:23.860572  494791 cri.go:89] found id: "913315b106bf848f2bc78aeee2dff59fb0d7a2768c8a5dc7d27460b0037c689d"
	I1201 20:41:23.860577  494791 cri.go:89] found id: "d708a60b3df7ced4763b714c1f1a36c6df9483c81552da97ea0386f1f248b3ef"
	I1201 20:41:23.860580  494791 cri.go:89] found id: "2608ffb63d77980a71676c95316c60a1bf74002a61cf3024ec1b056d5b0cf0be"
	I1201 20:41:23.860583  494791 cri.go:89] found id: "969d358cb0a5cd5ce66e56d51a58b46aef284ea9dc6eb5b45fbef1ed0b16310d"
	I1201 20:41:23.860585  494791 cri.go:89] found id: ""
	I1201 20:41:23.860636  494791 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:41:23.876526  494791 out.go:203] 
	W1201 20:41:23.879452  494791 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:41:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:41:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:41:23.879476  494791 out.go:285] * 
	* 
	W1201 20:41:23.886033  494791 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:41:23.888889  494791 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-947185 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.39s)
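Note on the failure mode: this disable failure (and the identical ones in the sections that follow) traces to one line in the stderr — `sudo runc list -f json` exits 1 with `open /run/runc: no such file or directory`, so minikube's paused-state check aborts before the addon is disabled, even though the preceding `crictl ps` succeeded. A minimal sketch of a guard for that situation, assuming a missing runc state directory simply means no runc-managed containers are paused (`list_paused` is an invented helper name, not minikube's code; `--root` is runc's global state-directory option):

```shell
# Hypothetical guard around the probe that fails in this report: if the runc
# state directory does not exist, report an empty container list instead of
# propagating runc's exit status 1.
list_paused() {
  state_dir="${1:-/run/runc}"
  if [ ! -d "$state_dir" ]; then
    # No runc state dir on this node (e.g. crio runtime): nothing to list.
    echo "[]"
    return 0
  fi
  runc --root "$state_dir" list -f json
}
```

With a guard like this, `list_paused /run/runc` on the failing node would print `[]` and return 0 rather than surfacing MK_ADDON_DISABLE_PAUSED.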

TestAddons/parallel/CloudSpanner (5.34s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-t5czj" [c0432b84-2ad5-48d5-96c0-be901af8acd8] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004427742s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-947185 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (330.138192ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1201 20:41:21.730142  494495 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:41:21.731010  494495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:21.731040  494495 out.go:374] Setting ErrFile to fd 2...
	I1201 20:41:21.731063  494495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:21.731401  494495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:41:21.731724  494495 mustload.go:66] Loading cluster: addons-947185
	I1201 20:41:21.732156  494495 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:21.732196  494495 addons.go:622] checking whether the cluster is paused
	I1201 20:41:21.732330  494495 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:21.732359  494495 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:41:21.732909  494495 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:41:21.755449  494495 ssh_runner.go:195] Run: systemctl --version
	I1201 20:41:21.755534  494495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:41:21.778490  494495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:41:21.901151  494495 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:41:21.901240  494495 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:41:21.942222  494495 cri.go:89] found id: "7b62bd9d487096c579f1550339b68679be8332190765f60e06cd4937777a9df1"
	I1201 20:41:21.942247  494495 cri.go:89] found id: "29c40b113be21f8fe1bbe615bf111319d1777cc9025daf564682c1eefb3b445b"
	I1201 20:41:21.942252  494495 cri.go:89] found id: "e850c7755eb3428fe6fa7ba19c93fb7bc371967c19c4b82128cc91cb8053b5f3"
	I1201 20:41:21.942256  494495 cri.go:89] found id: "efdf78311fe62cfc0a35e43f8eeb729633a306dc0ea4ee568313518540399159"
	I1201 20:41:21.942260  494495 cri.go:89] found id: "232ae6c256a292c984f8cb48df8eceb3ee1873530d9e6f34c1a187c754908802"
	I1201 20:41:21.942265  494495 cri.go:89] found id: "7a49eee06f360dfeaf94beb2bbb4cdce7e5500414fdd2cee0ce12df2e5eb7f32"
	I1201 20:41:21.942269  494495 cri.go:89] found id: "dea3b2ad8e17be71f39c61f41026c7cb1b4623b5b887bff64c5b0486499999a1"
	I1201 20:41:21.942273  494495 cri.go:89] found id: "295353c277ab2fdf17a5bdf35885cd4aaf50e1c7a0310e8e9e47c938ee142acc"
	I1201 20:41:21.942276  494495 cri.go:89] found id: "b322f4a7417f96b30191db63c4f54268c9461124eb22cd29fa7aeee5aeec2c92"
	I1201 20:41:21.942294  494495 cri.go:89] found id: "dfa409f637400d697ead65609bdc54109d491cdce86d60e6c023d32ba59f02ae"
	I1201 20:41:21.942299  494495 cri.go:89] found id: "7e60a35a8eba6d85c1e35fe7520e0df66d2be5e95549b379c81bee82272e106c"
	I1201 20:41:21.942303  494495 cri.go:89] found id: "3f46bcefd8d83c33619ab577977393c12c9eb43945e7d3125f4e246f5b0455d5"
	I1201 20:41:21.942306  494495 cri.go:89] found id: "58cd25bffc816d350673df609f72e7f334b3ed0cfccb32cf1b2638a79781b10e"
	I1201 20:41:21.942318  494495 cri.go:89] found id: "361dc8194383806d837ada675e727c49f53ac9cfd9b315a3224ea1ce0ebfcc3b"
	I1201 20:41:21.942322  494495 cri.go:89] found id: "9f83ec5f5e5514d5a500d7b543761751c20c52d5b0c4da0872a31d0231b628fd"
	I1201 20:41:21.942332  494495 cri.go:89] found id: "2355b41e2da84e3db29da2f6728212647e392fda597ebd954072085ccc5b4440"
	I1201 20:41:21.942342  494495 cri.go:89] found id: "1837dcaf5caf8fbebc71252339be8e05fe293e1db73f148ce648a43a877e6c06"
	I1201 20:41:21.942347  494495 cri.go:89] found id: "95ac3b0ee00d6ddb757ec6c4e57282c44007d2ea906b924c19d96021bc597dd9"
	I1201 20:41:21.942353  494495 cri.go:89] found id: "53fd34a71ad2647a883f70ec1aceb708dc5a011083d943427fe324abe79d43ac"
	I1201 20:41:21.942356  494495 cri.go:89] found id: "913315b106bf848f2bc78aeee2dff59fb0d7a2768c8a5dc7d27460b0037c689d"
	I1201 20:41:21.942363  494495 cri.go:89] found id: "d708a60b3df7ced4763b714c1f1a36c6df9483c81552da97ea0386f1f248b3ef"
	I1201 20:41:21.942367  494495 cri.go:89] found id: "2608ffb63d77980a71676c95316c60a1bf74002a61cf3024ec1b056d5b0cf0be"
	I1201 20:41:21.942374  494495 cri.go:89] found id: "969d358cb0a5cd5ce66e56d51a58b46aef284ea9dc6eb5b45fbef1ed0b16310d"
	I1201 20:41:21.942377  494495 cri.go:89] found id: ""
	I1201 20:41:21.942462  494495 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:41:21.975428  494495 out.go:203] 
	W1201 20:41:21.981260  494495 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:41:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:41:21.981296  494495 out.go:285] * 
	W1201 20:41:21.989281  494495 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:41:21.994416  494495 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-947185 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.34s)

TestAddons/parallel/LocalPath (8.47s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-947185 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-947185 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947185 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947185 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947185 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947185 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947185 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [524ee45b-967d-4dae-bd76-c085b906871b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [524ee45b-967d-4dae-bd76-c085b906871b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [524ee45b-967d-4dae-bd76-c085b906871b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004656776s
addons_test.go:967: (dbg) Run:  kubectl --context addons-947185 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 ssh "cat /opt/local-path-provisioner/pvc-8ec1522d-0dd4-4a4c-a6f7-cb8725038640_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-947185 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-947185 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-947185 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (294.536717ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1201 20:41:16.425472  494049 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:41:16.426790  494049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:16.426812  494049 out.go:374] Setting ErrFile to fd 2...
	I1201 20:41:16.426820  494049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:16.427373  494049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:41:16.427945  494049 mustload.go:66] Loading cluster: addons-947185
	I1201 20:41:16.428791  494049 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:16.428821  494049 addons.go:622] checking whether the cluster is paused
	I1201 20:41:16.429016  494049 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:16.429036  494049 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:41:16.429838  494049 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:41:16.447943  494049 ssh_runner.go:195] Run: systemctl --version
	I1201 20:41:16.448021  494049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:41:16.470082  494049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:41:16.574080  494049 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:41:16.574220  494049 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:41:16.616575  494049 cri.go:89] found id: "7b62bd9d487096c579f1550339b68679be8332190765f60e06cd4937777a9df1"
	I1201 20:41:16.616604  494049 cri.go:89] found id: "29c40b113be21f8fe1bbe615bf111319d1777cc9025daf564682c1eefb3b445b"
	I1201 20:41:16.616609  494049 cri.go:89] found id: "e850c7755eb3428fe6fa7ba19c93fb7bc371967c19c4b82128cc91cb8053b5f3"
	I1201 20:41:16.616614  494049 cri.go:89] found id: "efdf78311fe62cfc0a35e43f8eeb729633a306dc0ea4ee568313518540399159"
	I1201 20:41:16.616617  494049 cri.go:89] found id: "232ae6c256a292c984f8cb48df8eceb3ee1873530d9e6f34c1a187c754908802"
	I1201 20:41:16.616621  494049 cri.go:89] found id: "7a49eee06f360dfeaf94beb2bbb4cdce7e5500414fdd2cee0ce12df2e5eb7f32"
	I1201 20:41:16.616625  494049 cri.go:89] found id: "dea3b2ad8e17be71f39c61f41026c7cb1b4623b5b887bff64c5b0486499999a1"
	I1201 20:41:16.616628  494049 cri.go:89] found id: "295353c277ab2fdf17a5bdf35885cd4aaf50e1c7a0310e8e9e47c938ee142acc"
	I1201 20:41:16.616632  494049 cri.go:89] found id: "b322f4a7417f96b30191db63c4f54268c9461124eb22cd29fa7aeee5aeec2c92"
	I1201 20:41:16.616640  494049 cri.go:89] found id: "dfa409f637400d697ead65609bdc54109d491cdce86d60e6c023d32ba59f02ae"
	I1201 20:41:16.616643  494049 cri.go:89] found id: "7e60a35a8eba6d85c1e35fe7520e0df66d2be5e95549b379c81bee82272e106c"
	I1201 20:41:16.616647  494049 cri.go:89] found id: "3f46bcefd8d83c33619ab577977393c12c9eb43945e7d3125f4e246f5b0455d5"
	I1201 20:41:16.616651  494049 cri.go:89] found id: "58cd25bffc816d350673df609f72e7f334b3ed0cfccb32cf1b2638a79781b10e"
	I1201 20:41:16.616654  494049 cri.go:89] found id: "361dc8194383806d837ada675e727c49f53ac9cfd9b315a3224ea1ce0ebfcc3b"
	I1201 20:41:16.616658  494049 cri.go:89] found id: "9f83ec5f5e5514d5a500d7b543761751c20c52d5b0c4da0872a31d0231b628fd"
	I1201 20:41:16.616664  494049 cri.go:89] found id: "2355b41e2da84e3db29da2f6728212647e392fda597ebd954072085ccc5b4440"
	I1201 20:41:16.616667  494049 cri.go:89] found id: "1837dcaf5caf8fbebc71252339be8e05fe293e1db73f148ce648a43a877e6c06"
	I1201 20:41:16.616672  494049 cri.go:89] found id: "95ac3b0ee00d6ddb757ec6c4e57282c44007d2ea906b924c19d96021bc597dd9"
	I1201 20:41:16.616676  494049 cri.go:89] found id: "53fd34a71ad2647a883f70ec1aceb708dc5a011083d943427fe324abe79d43ac"
	I1201 20:41:16.616679  494049 cri.go:89] found id: "913315b106bf848f2bc78aeee2dff59fb0d7a2768c8a5dc7d27460b0037c689d"
	I1201 20:41:16.616685  494049 cri.go:89] found id: "d708a60b3df7ced4763b714c1f1a36c6df9483c81552da97ea0386f1f248b3ef"
	I1201 20:41:16.616694  494049 cri.go:89] found id: "2608ffb63d77980a71676c95316c60a1bf74002a61cf3024ec1b056d5b0cf0be"
	I1201 20:41:16.616698  494049 cri.go:89] found id: "969d358cb0a5cd5ce66e56d51a58b46aef284ea9dc6eb5b45fbef1ed0b16310d"
	I1201 20:41:16.616701  494049 cri.go:89] found id: ""
	I1201 20:41:16.616759  494049 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:41:16.639604  494049 out.go:203] 
	W1201 20:41:16.642638  494049 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:41:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:41:16.642716  494049 out.go:285] * 
	W1201 20:41:16.649601  494049 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:41:16.652929  494049 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-947185 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.47s)

TestAddons/parallel/NvidiaDevicePlugin (6.29s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-mm775" [ff4d850a-4fc7-4f97-b4d7-a5fec7ea255d] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.009407125s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-947185 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (280.507986ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1201 20:41:07.953721  493674 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:41:07.954722  493674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:07.954871  493674 out.go:374] Setting ErrFile to fd 2...
	I1201 20:41:07.954906  493674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:41:07.955521  493674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:41:07.956247  493674 mustload.go:66] Loading cluster: addons-947185
	I1201 20:41:07.957575  493674 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:07.957607  493674 addons.go:622] checking whether the cluster is paused
	I1201 20:41:07.957751  493674 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:41:07.957771  493674 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:41:07.958307  493674 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:41:07.981608  493674 ssh_runner.go:195] Run: systemctl --version
	I1201 20:41:07.981749  493674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:41:08.001045  493674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:41:08.111096  493674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:41:08.111204  493674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:41:08.149017  493674 cri.go:89] found id: "7b62bd9d487096c579f1550339b68679be8332190765f60e06cd4937777a9df1"
	I1201 20:41:08.149050  493674 cri.go:89] found id: "29c40b113be21f8fe1bbe615bf111319d1777cc9025daf564682c1eefb3b445b"
	I1201 20:41:08.149058  493674 cri.go:89] found id: "e850c7755eb3428fe6fa7ba19c93fb7bc371967c19c4b82128cc91cb8053b5f3"
	I1201 20:41:08.149062  493674 cri.go:89] found id: "efdf78311fe62cfc0a35e43f8eeb729633a306dc0ea4ee568313518540399159"
	I1201 20:41:08.149066  493674 cri.go:89] found id: "232ae6c256a292c984f8cb48df8eceb3ee1873530d9e6f34c1a187c754908802"
	I1201 20:41:08.149071  493674 cri.go:89] found id: "7a49eee06f360dfeaf94beb2bbb4cdce7e5500414fdd2cee0ce12df2e5eb7f32"
	I1201 20:41:08.149074  493674 cri.go:89] found id: "dea3b2ad8e17be71f39c61f41026c7cb1b4623b5b887bff64c5b0486499999a1"
	I1201 20:41:08.149078  493674 cri.go:89] found id: "295353c277ab2fdf17a5bdf35885cd4aaf50e1c7a0310e8e9e47c938ee142acc"
	I1201 20:41:08.149081  493674 cri.go:89] found id: "b322f4a7417f96b30191db63c4f54268c9461124eb22cd29fa7aeee5aeec2c92"
	I1201 20:41:08.149088  493674 cri.go:89] found id: "dfa409f637400d697ead65609bdc54109d491cdce86d60e6c023d32ba59f02ae"
	I1201 20:41:08.149099  493674 cri.go:89] found id: "7e60a35a8eba6d85c1e35fe7520e0df66d2be5e95549b379c81bee82272e106c"
	I1201 20:41:08.149114  493674 cri.go:89] found id: "3f46bcefd8d83c33619ab577977393c12c9eb43945e7d3125f4e246f5b0455d5"
	I1201 20:41:08.149118  493674 cri.go:89] found id: "58cd25bffc816d350673df609f72e7f334b3ed0cfccb32cf1b2638a79781b10e"
	I1201 20:41:08.149121  493674 cri.go:89] found id: "361dc8194383806d837ada675e727c49f53ac9cfd9b315a3224ea1ce0ebfcc3b"
	I1201 20:41:08.149125  493674 cri.go:89] found id: "9f83ec5f5e5514d5a500d7b543761751c20c52d5b0c4da0872a31d0231b628fd"
	I1201 20:41:08.149130  493674 cri.go:89] found id: "2355b41e2da84e3db29da2f6728212647e392fda597ebd954072085ccc5b4440"
	I1201 20:41:08.149134  493674 cri.go:89] found id: "1837dcaf5caf8fbebc71252339be8e05fe293e1db73f148ce648a43a877e6c06"
	I1201 20:41:08.149139  493674 cri.go:89] found id: "95ac3b0ee00d6ddb757ec6c4e57282c44007d2ea906b924c19d96021bc597dd9"
	I1201 20:41:08.149143  493674 cri.go:89] found id: "53fd34a71ad2647a883f70ec1aceb708dc5a011083d943427fe324abe79d43ac"
	I1201 20:41:08.149146  493674 cri.go:89] found id: "913315b106bf848f2bc78aeee2dff59fb0d7a2768c8a5dc7d27460b0037c689d"
	I1201 20:41:08.149151  493674 cri.go:89] found id: "d708a60b3df7ced4763b714c1f1a36c6df9483c81552da97ea0386f1f248b3ef"
	I1201 20:41:08.149154  493674 cri.go:89] found id: "2608ffb63d77980a71676c95316c60a1bf74002a61cf3024ec1b056d5b0cf0be"
	I1201 20:41:08.149158  493674 cri.go:89] found id: "969d358cb0a5cd5ce66e56d51a58b46aef284ea9dc6eb5b45fbef1ed0b16310d"
	I1201 20:41:08.149161  493674 cri.go:89] found id: ""
	I1201 20:41:08.149216  493674 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:41:08.166473  493674 out.go:203] 
	W1201 20:41:08.169332  493674 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:41:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:41:08.169353  493674 out.go:285] * 
	W1201 20:41:08.176282  493674 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:41:08.179231  493674 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-947185 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.29s)

TestAddons/parallel/Yakd (6.27s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-vsq2m" [8d41e1a2-71c4-48d0-aae3-4e678e6b6e81] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003046171s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-947185 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-947185 addons disable yakd --alsologtostderr -v=1: exit status 11 (268.735521ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1201 20:40:49.780196  493162 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:40:49.781021  493162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:40:49.781037  493162 out.go:374] Setting ErrFile to fd 2...
	I1201 20:40:49.781042  493162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:40:49.781330  493162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:40:49.781635  493162 mustload.go:66] Loading cluster: addons-947185
	I1201 20:40:49.782021  493162 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:40:49.782040  493162 addons.go:622] checking whether the cluster is paused
	I1201 20:40:49.782148  493162 config.go:182] Loaded profile config "addons-947185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:40:49.782163  493162 host.go:66] Checking if "addons-947185" exists ...
	I1201 20:40:49.782687  493162 cli_runner.go:164] Run: docker container inspect addons-947185 --format={{.State.Status}}
	I1201 20:40:49.810672  493162 ssh_runner.go:195] Run: systemctl --version
	I1201 20:40:49.810768  493162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-947185
	I1201 20:40:49.830432  493162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/addons-947185/id_rsa Username:docker}
	I1201 20:40:49.933762  493162 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:40:49.933896  493162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:40:49.964044  493162 cri.go:89] found id: "7b62bd9d487096c579f1550339b68679be8332190765f60e06cd4937777a9df1"
	I1201 20:40:49.964069  493162 cri.go:89] found id: "29c40b113be21f8fe1bbe615bf111319d1777cc9025daf564682c1eefb3b445b"
	I1201 20:40:49.964079  493162 cri.go:89] found id: "e850c7755eb3428fe6fa7ba19c93fb7bc371967c19c4b82128cc91cb8053b5f3"
	I1201 20:40:49.964084  493162 cri.go:89] found id: "efdf78311fe62cfc0a35e43f8eeb729633a306dc0ea4ee568313518540399159"
	I1201 20:40:49.964088  493162 cri.go:89] found id: "232ae6c256a292c984f8cb48df8eceb3ee1873530d9e6f34c1a187c754908802"
	I1201 20:40:49.964092  493162 cri.go:89] found id: "7a49eee06f360dfeaf94beb2bbb4cdce7e5500414fdd2cee0ce12df2e5eb7f32"
	I1201 20:40:49.964095  493162 cri.go:89] found id: "dea3b2ad8e17be71f39c61f41026c7cb1b4623b5b887bff64c5b0486499999a1"
	I1201 20:40:49.964099  493162 cri.go:89] found id: "295353c277ab2fdf17a5bdf35885cd4aaf50e1c7a0310e8e9e47c938ee142acc"
	I1201 20:40:49.964102  493162 cri.go:89] found id: "b322f4a7417f96b30191db63c4f54268c9461124eb22cd29fa7aeee5aeec2c92"
	I1201 20:40:49.964108  493162 cri.go:89] found id: "dfa409f637400d697ead65609bdc54109d491cdce86d60e6c023d32ba59f02ae"
	I1201 20:40:49.964112  493162 cri.go:89] found id: "7e60a35a8eba6d85c1e35fe7520e0df66d2be5e95549b379c81bee82272e106c"
	I1201 20:40:49.964115  493162 cri.go:89] found id: "3f46bcefd8d83c33619ab577977393c12c9eb43945e7d3125f4e246f5b0455d5"
	I1201 20:40:49.964120  493162 cri.go:89] found id: "58cd25bffc816d350673df609f72e7f334b3ed0cfccb32cf1b2638a79781b10e"
	I1201 20:40:49.964123  493162 cri.go:89] found id: "361dc8194383806d837ada675e727c49f53ac9cfd9b315a3224ea1ce0ebfcc3b"
	I1201 20:40:49.964127  493162 cri.go:89] found id: "9f83ec5f5e5514d5a500d7b543761751c20c52d5b0c4da0872a31d0231b628fd"
	I1201 20:40:49.964132  493162 cri.go:89] found id: "2355b41e2da84e3db29da2f6728212647e392fda597ebd954072085ccc5b4440"
	I1201 20:40:49.964143  493162 cri.go:89] found id: "1837dcaf5caf8fbebc71252339be8e05fe293e1db73f148ce648a43a877e6c06"
	I1201 20:40:49.964146  493162 cri.go:89] found id: "95ac3b0ee00d6ddb757ec6c4e57282c44007d2ea906b924c19d96021bc597dd9"
	I1201 20:40:49.964150  493162 cri.go:89] found id: "53fd34a71ad2647a883f70ec1aceb708dc5a011083d943427fe324abe79d43ac"
	I1201 20:40:49.964153  493162 cri.go:89] found id: "913315b106bf848f2bc78aeee2dff59fb0d7a2768c8a5dc7d27460b0037c689d"
	I1201 20:40:49.964157  493162 cri.go:89] found id: "d708a60b3df7ced4763b714c1f1a36c6df9483c81552da97ea0386f1f248b3ef"
	I1201 20:40:49.964161  493162 cri.go:89] found id: "2608ffb63d77980a71676c95316c60a1bf74002a61cf3024ec1b056d5b0cf0be"
	I1201 20:40:49.964164  493162 cri.go:89] found id: "969d358cb0a5cd5ce66e56d51a58b46aef284ea9dc6eb5b45fbef1ed0b16310d"
	I1201 20:40:49.964167  493162 cri.go:89] found id: ""
	I1201 20:40:49.964227  493162 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:40:49.980056  493162 out.go:203] 
	W1201 20:40:49.982961  493162 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:40:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:40:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:40:49.982989  493162 out.go:285] * 
	* 
	W1201 20:40:49.990140  493162 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:40:49.993049  493162 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-947185 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)
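Each addon-disable failure above exits with `MK_ADDON_DISABLE_PAUSED` for the same reason: minikube's paused-check shells out to `sudo runc list -f json`, and on this crio node `/run/runc` does not exist. A minimal local sketch of that failure shape (an assumption-laden reproduction of the check's logic, not minikube's actual code):

```shell
# runc keeps container state under a root directory (default /run/runc).
# When that directory is absent, `runc list` fails with the same
# "open ...: no such file or directory" message seen in the stderr above.
state_root="/tmp/no-such-runc-root-$$"   # stand-in for /run/runc on the node
if [ -d "$state_root" ]; then
  echo "runc state dir present; list would succeed"
else
  echo "open ${state_root}: no such file or directory"
fi
```

Note that listing containers via `crictl ps` (which the log shows succeeding just before the runc call) does not depend on runc's state directory at all.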

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-074555 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-074555 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-vrrp6" [beb37a5d-3f22-4d67-a9fb-b71a743dfc6f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074555 -n functional-074555
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-01 20:58:03.400799723 +0000 UTC m=+1228.126601231
functional_test.go:1645: (dbg) Run:  kubectl --context functional-074555 describe po hello-node-connect-7d85dfc575-vrrp6 -n default
functional_test.go:1645: (dbg) kubectl --context functional-074555 describe po hello-node-connect-7d85dfc575-vrrp6 -n default:
Name:             hello-node-connect-7d85dfc575-vrrp6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-074555/192.168.49.2
Start Time:       Mon, 01 Dec 2025 20:48:02 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j84nc (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-j84nc:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-vrrp6 to functional-074555
Normal   Pulling    7m1s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m1s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m1s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m52s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m52s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-074555 logs hello-node-connect-7d85dfc575-vrrp6 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-074555 logs hello-node-connect-7d85dfc575-vrrp6 -n default: exit status 1 (95.482545ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-vrrp6" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-074555 logs hello-node-connect-7d85dfc575-vrrp6 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
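The kubelet events pin down the root cause: crio's short-name mode is "enforcing", so the unqualified image `kicbase/echo-server` resolves ambiguously across the configured unqualified-search registries and the pull is rejected before it starts. Two hedged workarounds follow; the drop-in path and alias are assumptions about the node's containers-registries.conf.d layout, not taken from this run:

```shell
# Option 1: fully qualify the image so short-name resolution never runs.
echo 'kubectl create deployment hello-node-connect --image docker.io/kicbase/echo-server'

# Option 2: add a short-name alias. On the node this file would live under
# /etc/containers/registries.conf.d/ (written to /tmp here for illustration).
conf="/tmp/99-echo-server.conf"
cat > "$conf" <<'EOF'
[aliases]
  "kicbase/echo-server" = "docker.io/kicbase/echo-server"
EOF
cat "$conf"
```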
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-074555 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-vrrp6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-074555/192.168.49.2
Start Time:       Mon, 01 Dec 2025 20:48:02 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j84nc (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-j84nc:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-vrrp6 to functional-074555
Normal   Pulling    7m1s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m1s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m1s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m52s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m52s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-074555 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-074555 logs -l app=hello-node-connect: exit status 1 (92.754201ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-vrrp6" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-074555 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-074555 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.124.213
IPs:                      10.106.124.213
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32442/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-074555
helpers_test.go:243: (dbg) docker inspect functional-074555:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9ea5b18d3bc04acbec9e80570d72c29d16f4016d3857786811b8a809302c9f32",
	        "Created": "2025-12-01T20:45:08.29385224Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 501856,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:45:08.354670554Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/9ea5b18d3bc04acbec9e80570d72c29d16f4016d3857786811b8a809302c9f32/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9ea5b18d3bc04acbec9e80570d72c29d16f4016d3857786811b8a809302c9f32/hostname",
	        "HostsPath": "/var/lib/docker/containers/9ea5b18d3bc04acbec9e80570d72c29d16f4016d3857786811b8a809302c9f32/hosts",
	        "LogPath": "/var/lib/docker/containers/9ea5b18d3bc04acbec9e80570d72c29d16f4016d3857786811b8a809302c9f32/9ea5b18d3bc04acbec9e80570d72c29d16f4016d3857786811b8a809302c9f32-json.log",
	        "Name": "/functional-074555",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-074555:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-074555",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9ea5b18d3bc04acbec9e80570d72c29d16f4016d3857786811b8a809302c9f32",
	                "LowerDir": "/var/lib/docker/overlay2/6c6e914e521db4428f96a4c2be40082c7d6f42351fd58912c29bdc4607211816-init/diff:/var/lib/docker/overlay2/f0ba49b44048d740697b37803f992c2f7a99e21ce77995ff128ceffc01329aa1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c6e914e521db4428f96a4c2be40082c7d6f42351fd58912c29bdc4607211816/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c6e914e521db4428f96a4c2be40082c7d6f42351fd58912c29bdc4607211816/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c6e914e521db4428f96a4c2be40082c7d6f42351fd58912c29bdc4607211816/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-074555",
	                "Source": "/var/lib/docker/volumes/functional-074555/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-074555",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-074555",
	                "name.minikube.sigs.k8s.io": "functional-074555",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "11c358974a937640404055e5bc1ca6ed0ef7429bb65e6784f1a4a7902c0ffd0a",
	            "SandboxKey": "/var/run/docker/netns/11c358974a93",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-074555": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:50:7e:9c:05:39",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a61441ef668cca1ea790b686d590a017751f6b66c47ba03fd2ecbfe469aa32d3",
	                    "EndpointID": "31070efebcfa4db7a58d0ee51234d8956e83a5bc2b9fd61a76ac1ac911d0c54d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-074555",
	                        "9ea5b18d3bc0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-074555 -n functional-074555
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-074555 logs -n 25: (1.507381227s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-074555 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:46 UTC │ 01 Dec 25 20:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 20:46 UTC │ 01 Dec 25 20:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 20:46 UTC │ 01 Dec 25 20:46 UTC │
	│ kubectl │ functional-074555 kubectl -- --context functional-074555 get pods                                                          │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:46 UTC │ 01 Dec 25 20:46 UTC │
	│ start   │ -p functional-074555 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:46 UTC │ 01 Dec 25 20:47 UTC │
	│ service │ invalid-svc -p functional-074555                                                                                           │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:47 UTC │                     │
	│ config  │ functional-074555 config unset cpus                                                                                        │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:47 UTC │ 01 Dec 25 20:47 UTC │
	│ cp      │ functional-074555 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:47 UTC │ 01 Dec 25 20:47 UTC │
	│ config  │ functional-074555 config get cpus                                                                                          │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:47 UTC │                     │
	│ config  │ functional-074555 config set cpus 2                                                                                        │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:47 UTC │ 01 Dec 25 20:47 UTC │
	│ config  │ functional-074555 config get cpus                                                                                          │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:47 UTC │ 01 Dec 25 20:47 UTC │
	│ config  │ functional-074555 config unset cpus                                                                                        │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:47 UTC │ 01 Dec 25 20:47 UTC │
	│ ssh     │ functional-074555 ssh -n functional-074555 sudo cat /home/docker/cp-test.txt                                               │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:47 UTC │ 01 Dec 25 20:47 UTC │
	│ config  │ functional-074555 config get cpus                                                                                          │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:47 UTC │                     │
	│ ssh     │ functional-074555 ssh echo hello                                                                                           │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:47 UTC │ 01 Dec 25 20:47 UTC │
	│ cp      │ functional-074555 cp functional-074555:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3796138896/001/cp-test.txt │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:47 UTC │ 01 Dec 25 20:47 UTC │
	│ ssh     │ functional-074555 ssh cat /etc/hostname                                                                                    │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:47 UTC │ 01 Dec 25 20:47 UTC │
	│ ssh     │ functional-074555 ssh -n functional-074555 sudo cat /home/docker/cp-test.txt                                               │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:47 UTC │ 01 Dec 25 20:47 UTC │
	│ tunnel  │ functional-074555 tunnel --alsologtostderr                                                                                 │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:47 UTC │                     │
	│ tunnel  │ functional-074555 tunnel --alsologtostderr                                                                                 │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:47 UTC │                     │
	│ cp      │ functional-074555 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:47 UTC │ 01 Dec 25 20:47 UTC │
	│ tunnel  │ functional-074555 tunnel --alsologtostderr                                                                                 │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:47 UTC │                     │
	│ ssh     │ functional-074555 ssh -n functional-074555 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:47 UTC │ 01 Dec 25 20:47 UTC │
	│ addons  │ functional-074555 addons list                                                                                              │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:48 UTC │ 01 Dec 25 20:48 UTC │
	│ addons  │ functional-074555 addons list -o json                                                                                      │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:48 UTC │ 01 Dec 25 20:48 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:46:58
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:46:58.153572  506033 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:46:58.153678  506033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:46:58.153682  506033 out.go:374] Setting ErrFile to fd 2...
	I1201 20:46:58.153686  506033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:46:58.154055  506033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:46:58.155007  506033 out.go:368] Setting JSON to false
	I1201 20:46:58.156125  506033 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8968,"bootTime":1764613051,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 20:46:58.156190  506033 start.go:143] virtualization:  
	I1201 20:46:58.159709  506033 out.go:179] * [functional-074555] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 20:46:58.163559  506033 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:46:58.163761  506033 notify.go:221] Checking for updates...
	I1201 20:46:58.169376  506033 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:46:58.172393  506033 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 20:46:58.175289  506033 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 20:46:58.178131  506033 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 20:46:58.180873  506033 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:46:58.184251  506033 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:46:58.184345  506033 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:46:58.212585  506033 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 20:46:58.212689  506033 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:46:58.293668  506033 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-12-01 20:46:58.28384805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 20:46:58.293757  506033 docker.go:319] overlay module found
	I1201 20:46:58.296974  506033 out.go:179] * Using the docker driver based on existing profile
	I1201 20:46:58.301186  506033 start.go:309] selected driver: docker
	I1201 20:46:58.301195  506033 start.go:927] validating driver "docker" against &{Name:functional-074555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-074555 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:46:58.301275  506033 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:46:58.301361  506033 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:46:58.356627  506033 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-12-01 20:46:58.347762296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 20:46:58.357053  506033 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:46:58.357093  506033 cni.go:84] Creating CNI manager for ""
	I1201 20:46:58.357226  506033 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:46:58.357308  506033 start.go:353] cluster config:
	{Name:functional-074555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-074555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:46:58.362178  506033 out.go:179] * Starting "functional-074555" primary control-plane node in "functional-074555" cluster
	I1201 20:46:58.365137  506033 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:46:58.368051  506033 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:46:58.370769  506033 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:46:58.370806  506033 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1201 20:46:58.370815  506033 cache.go:65] Caching tarball of preloaded images
	I1201 20:46:58.370846  506033 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 20:46:58.370905  506033 preload.go:238] Found /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1201 20:46:58.370913  506033 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 20:46:58.371022  506033 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/config.json ...
	I1201 20:46:58.391555  506033 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:46:58.391566  506033 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1201 20:46:58.391593  506033 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:46:58.391625  506033 start.go:360] acquireMachinesLock for functional-074555: {Name:mk229d272b88ea6fb795d42755301972bbd1b3fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:46:58.391694  506033 start.go:364] duration metric: took 51.38µs to acquireMachinesLock for "functional-074555"
	I1201 20:46:58.391727  506033 start.go:96] Skipping create...Using existing machine configuration
	I1201 20:46:58.391731  506033 fix.go:54] fixHost starting: 
	I1201 20:46:58.392076  506033 cli_runner.go:164] Run: docker container inspect functional-074555 --format={{.State.Status}}
	I1201 20:46:58.408655  506033 fix.go:112] recreateIfNeeded on functional-074555: state=Running err=<nil>
	W1201 20:46:58.408675  506033 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 20:46:58.411866  506033 out.go:252] * Updating the running docker "functional-074555" container ...
	I1201 20:46:58.411887  506033 machine.go:94] provisionDockerMachine start ...
	I1201 20:46:58.411981  506033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074555
	I1201 20:46:58.429194  506033 main.go:143] libmachine: Using SSH client type: native
	I1201 20:46:58.429517  506033 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33175 <nil> <nil>}
	I1201 20:46:58.429524  506033 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:46:58.583008  506033 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074555
	
	I1201 20:46:58.583023  506033 ubuntu.go:182] provisioning hostname "functional-074555"
	I1201 20:46:58.583108  506033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074555
	I1201 20:46:58.602446  506033 main.go:143] libmachine: Using SSH client type: native
	I1201 20:46:58.602762  506033 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33175 <nil> <nil>}
	I1201 20:46:58.602770  506033 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-074555 && echo "functional-074555" | sudo tee /etc/hostname
	I1201 20:46:58.764643  506033 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074555
	
	I1201 20:46:58.764713  506033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074555
	I1201 20:46:58.782883  506033 main.go:143] libmachine: Using SSH client type: native
	I1201 20:46:58.783276  506033 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33175 <nil> <nil>}
	I1201 20:46:58.783294  506033 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-074555' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-074555/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-074555' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:46:58.935813  506033 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:46:58.935844  506033 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-482752/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-482752/.minikube}
	I1201 20:46:58.935867  506033 ubuntu.go:190] setting up certificates
	I1201 20:46:58.935876  506033 provision.go:84] configureAuth start
	I1201 20:46:58.935938  506033 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074555
	I1201 20:46:58.956364  506033 provision.go:143] copyHostCerts
	I1201 20:46:58.956454  506033 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem, removing ...
	I1201 20:46:58.956463  506033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 20:46:58.956533  506033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem (1675 bytes)
	I1201 20:46:58.956637  506033 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem, removing ...
	I1201 20:46:58.956641  506033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 20:46:58.956666  506033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem (1082 bytes)
	I1201 20:46:58.956719  506033 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem, removing ...
	I1201 20:46:58.956723  506033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 20:46:58.956746  506033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem (1123 bytes)
	I1201 20:46:58.956813  506033 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem org=jenkins.functional-074555 san=[127.0.0.1 192.168.49.2 functional-074555 localhost minikube]
	I1201 20:46:59.090589  506033 provision.go:177] copyRemoteCerts
	I1201 20:46:59.090642  506033 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:46:59.090684  506033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074555
	I1201 20:46:59.110345  506033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-074555/id_rsa Username:docker}
	I1201 20:46:59.218750  506033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:46:59.236632  506033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1201 20:46:59.254892  506033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1201 20:46:59.273872  506033 provision.go:87] duration metric: took 337.971828ms to configureAuth
	I1201 20:46:59.273890  506033 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:46:59.274083  506033 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:46:59.274192  506033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074555
	I1201 20:46:59.291619  506033 main.go:143] libmachine: Using SSH client type: native
	I1201 20:46:59.291959  506033 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33175 <nil> <nil>}
	I1201 20:46:59.291971  506033 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:47:04.713569  506033 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:47:04.713582  506033 machine.go:97] duration metric: took 6.301688624s to provisionDockerMachine
	I1201 20:47:04.713598  506033 start.go:293] postStartSetup for "functional-074555" (driver="docker")
	I1201 20:47:04.713609  506033 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:47:04.713686  506033 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:47:04.713725  506033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074555
	I1201 20:47:04.731003  506033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-074555/id_rsa Username:docker}
	I1201 20:47:04.835693  506033 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:47:04.839465  506033 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:47:04.839482  506033 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:47:04.839493  506033 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/addons for local assets ...
	I1201 20:47:04.839556  506033 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/files for local assets ...
	I1201 20:47:04.839641  506033 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> 4860022.pem in /etc/ssl/certs
	I1201 20:47:04.839722  506033 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts -> hosts in /etc/test/nested/copy/486002
	I1201 20:47:04.839779  506033 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/486002
	I1201 20:47:04.847748  506033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 20:47:04.866422  506033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts --> /etc/test/nested/copy/486002/hosts (40 bytes)
	I1201 20:47:04.885489  506033 start.go:296] duration metric: took 171.874049ms for postStartSetup
	I1201 20:47:04.885592  506033 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:47:04.885639  506033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074555
	I1201 20:47:04.904345  506033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-074555/id_rsa Username:docker}
	I1201 20:47:05.014128  506033 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 20:47:05.021229  506033 fix.go:56] duration metric: took 6.629481233s for fixHost
	I1201 20:47:05.021247  506033 start.go:83] releasing machines lock for "functional-074555", held for 6.62954542s
	I1201 20:47:05.021323  506033 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074555
	I1201 20:47:05.039459  506033 ssh_runner.go:195] Run: cat /version.json
	I1201 20:47:05.039494  506033 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:47:05.039501  506033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074555
	I1201 20:47:05.039544  506033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074555
	I1201 20:47:05.060889  506033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-074555/id_rsa Username:docker}
	I1201 20:47:05.071881  506033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-074555/id_rsa Username:docker}
	I1201 20:47:05.162907  506033 ssh_runner.go:195] Run: systemctl --version
	I1201 20:47:05.254974  506033 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:47:05.294853  506033 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:47:05.299942  506033 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:47:05.300005  506033 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:47:05.309056  506033 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 20:47:05.309071  506033 start.go:496] detecting cgroup driver to use...
	I1201 20:47:05.309108  506033 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1201 20:47:05.309156  506033 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:47:05.325027  506033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:47:05.338770  506033 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:47:05.338832  506033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:47:05.354881  506033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:47:05.368263  506033 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:47:05.510472  506033 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:47:05.652837  506033 docker.go:234] disabling docker service ...
	I1201 20:47:05.652920  506033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:47:05.669757  506033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:47:05.684216  506033 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:47:05.836956  506033 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:47:05.975788  506033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:47:05.989163  506033 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:47:06.013070  506033 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:47:06.013170  506033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:47:06.024264  506033 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 20:47:06.024338  506033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:47:06.034899  506033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:47:06.045306  506033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:47:06.054800  506033 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:47:06.063455  506033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:47:06.073128  506033 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:47:06.082311  506033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:47:06.091861  506033 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:47:06.100013  506033 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:47:06.108101  506033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:47:06.255486  506033 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:47:12.720342  506033 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.464831954s)
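	The sed-based cri-o reconfiguration logged above (pause image, cgroup driver, conmon cgroup) can be replayed against a scratch copy of the drop-in. This is a sketch only: the file below is a stand-in for /etc/crio/crio.conf.d/02-crio.conf, and its starting values are assumptions.

```shell
# Stand-in for /etc/crio/crio.conf.d/02-crio.conf (initial contents assumed)
conf=$(mktemp)
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF
# Same edits as the logged sh -c commands, minus sudo, against the scratch file
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
cat "$conf"
```

	On the real node the sequence ends with `systemctl daemon-reload` and `systemctl restart crio`, which is what accounts for the ~6.5s gap between the restart command and its completion in the timestamps above.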
	I1201 20:47:12.720360  506033 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:47:12.720441  506033 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:47:12.724492  506033 start.go:564] Will wait 60s for crictl version
	I1201 20:47:12.724553  506033 ssh_runner.go:195] Run: which crictl
	I1201 20:47:12.728644  506033 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:47:12.756903  506033 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:47:12.756990  506033 ssh_runner.go:195] Run: crio --version
	I1201 20:47:12.791751  506033 ssh_runner.go:195] Run: crio --version
	I1201 20:47:12.825738  506033 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1201 20:47:12.828865  506033 cli_runner.go:164] Run: docker network inspect functional-074555 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:47:12.845475  506033 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1201 20:47:12.852820  506033 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1201 20:47:12.855740  506033 kubeadm.go:884] updating cluster {Name:functional-074555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-074555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:47:12.855888  506033 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:47:12.855962  506033 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:47:12.893161  506033 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:47:12.893173  506033 crio.go:433] Images already preloaded, skipping extraction
	I1201 20:47:12.893236  506033 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:47:12.919103  506033 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:47:12.919116  506033 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:47:12.919123  506033 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.2 crio true true} ...
	I1201 20:47:12.919257  506033 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-074555 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:functional-074555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:47:12.919340  506033 ssh_runner.go:195] Run: crio config
	I1201 20:47:12.981764  506033 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1201 20:47:12.981786  506033 cni.go:84] Creating CNI manager for ""
	I1201 20:47:12.981794  506033 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:47:12.981810  506033 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:47:12.981833  506033 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-074555 NodeName:functional-074555 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:47:12.981959  506033 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-074555"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
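	The dump above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity check on such a stream is to list the `kind` of each document; the trimmed stand-in below is an assumption, not the full generated config.

```shell
# Trimmed stand-in for the generated /var/tmp/minikube/kubeadm.yaml
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# One line per document: the kind it declares
awk -F': ' '/^kind:/ {print $2}' "$cfg"
```

	On a node where kubeadm is available, `kubeadm config validate --config <file>` (kubeadm v1.26+) gives a stricter check than this structural one.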
	I1201 20:47:12.982031  506033 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 20:47:12.990245  506033 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:47:12.990319  506033 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:47:12.998308  506033 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1201 20:47:13.013707  506033 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:47:13.031183  506033 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1201 20:47:13.045841  506033 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:47:13.050010  506033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:47:13.185587  506033 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:47:13.200077  506033 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555 for IP: 192.168.49.2
	I1201 20:47:13.200088  506033 certs.go:195] generating shared ca certs ...
	I1201 20:47:13.200103  506033 certs.go:227] acquiring lock for ca certs: {Name:mk0475ccdbd6f854bab22fd8dfb32cc1af021336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:47:13.200243  506033 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key
	I1201 20:47:13.200285  506033 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key
	I1201 20:47:13.200291  506033 certs.go:257] generating profile certs ...
	I1201 20:47:13.200384  506033 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.key
	I1201 20:47:13.200438  506033 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/apiserver.key.d585d19d
	I1201 20:47:13.200514  506033 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/proxy-client.key
	I1201 20:47:13.200630  506033 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem (1338 bytes)
	W1201 20:47:13.200668  506033 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002_empty.pem, impossibly tiny 0 bytes
	I1201 20:47:13.200675  506033 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 20:47:13.200707  506033 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:47:13.200730  506033 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:47:13.200755  506033 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem (1675 bytes)
	I1201 20:47:13.200799  506033 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 20:47:13.201412  506033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:47:13.221440  506033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 20:47:13.240355  506033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:47:13.258606  506033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1201 20:47:13.276648  506033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 20:47:13.294217  506033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:47:13.311966  506033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:47:13.329154  506033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:47:13.348156  506033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:47:13.369567  506033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem --> /usr/share/ca-certificates/486002.pem (1338 bytes)
	I1201 20:47:13.389849  506033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /usr/share/ca-certificates/4860022.pem (1708 bytes)
	I1201 20:47:13.408895  506033 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:47:13.422003  506033 ssh_runner.go:195] Run: openssl version
	I1201 20:47:13.428973  506033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4860022.pem && ln -fs /usr/share/ca-certificates/4860022.pem /etc/ssl/certs/4860022.pem"
	I1201 20:47:13.437711  506033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4860022.pem
	I1201 20:47:13.441759  506033 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 20:45 /usr/share/ca-certificates/4860022.pem
	I1201 20:47:13.441827  506033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4860022.pem
	I1201 20:47:13.484105  506033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4860022.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:47:13.492119  506033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:47:13.500495  506033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:47:13.504672  506033 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:47:13.504740  506033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:47:13.546642  506033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:47:13.556229  506033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/486002.pem && ln -fs /usr/share/ca-certificates/486002.pem /etc/ssl/certs/486002.pem"
	I1201 20:47:13.565153  506033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/486002.pem
	I1201 20:47:13.569535  506033 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 20:45 /usr/share/ca-certificates/486002.pem
	I1201 20:47:13.569602  506033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/486002.pem
	I1201 20:47:13.615367  506033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/486002.pem /etc/ssl/certs/51391683.0"
	I1201 20:47:13.623603  506033 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:47:13.627651  506033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:47:13.673776  506033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:47:13.753104  506033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:47:13.844274  506033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:47:13.945493  506033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:47:14.061604  506033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
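	The openssl calls above do two distinct things: `x509 -hash` derives the subject-hash filenames used for the /etc/ssl/certs symlinks (e.g. b5213941.0), and `-checkend 86400` asks whether a certificate is still valid 24 hours from now. Both can be exercised on a throwaway self-signed certificate; all names and paths below are illustrative, not the minikube ones.

```shell
tmp=$(mktemp -d)
# Throwaway self-signed cert, valid 2 days (so -checkend 86400 passes)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 2 \
  -keyout "$tmp/demo.key" -out "$tmp/demo.pem" 2>/dev/null
# Subject-hash symlink, mirroring the ln -fs into /etc/ssl/certs above
h=$(openssl x509 -hash -noout -in "$tmp/demo.pem")
ln -fs "$tmp/demo.pem" "$tmp/$h.0"
# Exit status 0 means: still valid 86400s (24h) from now
openssl x509 -noout -in "$tmp/$h.0" -checkend 86400 && echo "valid for >=24h"
```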
	I1201 20:47:14.147613  506033 kubeadm.go:401] StartCluster: {Name:functional-074555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-074555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:47:14.147704  506033 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:47:14.147777  506033 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:47:14.201156  506033 cri.go:89] found id: "051828e0578b86b5aa7384bafb77820fd3cffe481165ad39e6fcf6e4d12e60af"
	I1201 20:47:14.201168  506033 cri.go:89] found id: "9f38ec74d26bb06b9156db59706072c29aad6cd6ac01ed9cc592011aa8bd03fc"
	I1201 20:47:14.201171  506033 cri.go:89] found id: "2202400f9cea4b1e7ff435b72ae7f8675046afa822580960524bb83f9b22ee44"
	I1201 20:47:14.201174  506033 cri.go:89] found id: "145617cc4f3ba4b42601cfe155f26d9e56f15add5f534b626b0702a70b9cbcea"
	I1201 20:47:14.201176  506033 cri.go:89] found id: "bfa0ba402af68604d224a260b8e9f1221900fca7ad26063ba759d3077eee0c86"
	I1201 20:47:14.201179  506033 cri.go:89] found id: "d76f2d4ff26e5155c66fbfefba0fccf9eafb624f1c6f0c1d94e366152c504604"
	I1201 20:47:14.201182  506033 cri.go:89] found id: "61f3f7721fdb655f3e5bca108e2a930ebb4a23f52ce03164607e76f2f26350bc"
	I1201 20:47:14.201184  506033 cri.go:89] found id: "c82d182de6b06d2ac2fc2f7437a01ba1d89b7749ee6de95cc479fb192b9d51d4"
	I1201 20:47:14.201187  506033 cri.go:89] found id: "43f459996d8ac3aa2fe1462884fd30e2be7a90127a83783cc6dcfdcebd0f9719"
	I1201 20:47:14.201193  506033 cri.go:89] found id: "6d03b8878d68847db2de67541def5c0a55aa8e649b3ae8d271ff0bd8f7a8d8c3"
	I1201 20:47:14.201212  506033 cri.go:89] found id: "6f7f030728ba978e9d1c40bc15d4e97d038f707c17ae6256a91e181759d66be8"
	I1201 20:47:14.201214  506033 cri.go:89] found id: "210f9d677a8cce9e5a0bb8b3718e5149ae90a64c7195d468fdf6d239450e77d8"
	I1201 20:47:14.201216  506033 cri.go:89] found id: "d49dafd49655e5e846be1c9f6b0037af1130a39f13c78cae254965aa8c094fe5"
	I1201 20:47:14.201227  506033 cri.go:89] found id: "cc3cb078e7b953119709607ea2fc7e61a131ba11cbdeb8254ac8ab07993f7c27"
	I1201 20:47:14.201230  506033 cri.go:89] found id: ""
	I1201 20:47:14.201295  506033 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 20:47:14.225255  506033 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:47:14Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:47:14.225338  506033 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:47:14.239827  506033 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:47:14.239838  506033 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:47:14.239925  506033 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:47:14.256723  506033 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:47:14.257405  506033 kubeconfig.go:125] found "functional-074555" server: "https://192.168.49.2:8441"
	I1201 20:47:14.259478  506033 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:47:14.274350  506033 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-01 20:45:16.728657853 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-01 20:47:13.042228348 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
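	The drift check above keys off the exit status of `diff -u`: 0 when the deployed kubeadm.yaml matches the freshly generated kubeadm.yaml.new, non-zero when they differ. A minimal sketch of the same pattern, with file contents assumed:

```shell
old=$(mktemp); new=$(mktemp)
echo 'value: "DefaultStorageClass"'   > "$old"
echo 'value: "NamespaceAutoProvision"' > "$new"
# diff exits non-zero when the files differ
if diff -u "$old" "$new" > /dev/null; then
  echo "config unchanged"
else
  echo "drift detected - reconfigure from the new file"
fi
```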
	I1201 20:47:14.274369  506033 kubeadm.go:1161] stopping kube-system containers ...
	I1201 20:47:14.274381  506033 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1201 20:47:14.274449  506033 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:47:14.339976  506033 cri.go:89] found id: "051828e0578b86b5aa7384bafb77820fd3cffe481165ad39e6fcf6e4d12e60af"
	I1201 20:47:14.340000  506033 cri.go:89] found id: "9f38ec74d26bb06b9156db59706072c29aad6cd6ac01ed9cc592011aa8bd03fc"
	I1201 20:47:14.340003  506033 cri.go:89] found id: "2202400f9cea4b1e7ff435b72ae7f8675046afa822580960524bb83f9b22ee44"
	I1201 20:47:14.340006  506033 cri.go:89] found id: "145617cc4f3ba4b42601cfe155f26d9e56f15add5f534b626b0702a70b9cbcea"
	I1201 20:47:14.340009  506033 cri.go:89] found id: "bfa0ba402af68604d224a260b8e9f1221900fca7ad26063ba759d3077eee0c86"
	I1201 20:47:14.340013  506033 cri.go:89] found id: "d76f2d4ff26e5155c66fbfefba0fccf9eafb624f1c6f0c1d94e366152c504604"
	I1201 20:47:14.340026  506033 cri.go:89] found id: "61f3f7721fdb655f3e5bca108e2a930ebb4a23f52ce03164607e76f2f26350bc"
	I1201 20:47:14.340029  506033 cri.go:89] found id: "c82d182de6b06d2ac2fc2f7437a01ba1d89b7749ee6de95cc479fb192b9d51d4"
	I1201 20:47:14.340032  506033 cri.go:89] found id: "43f459996d8ac3aa2fe1462884fd30e2be7a90127a83783cc6dcfdcebd0f9719"
	I1201 20:47:14.340040  506033 cri.go:89] found id: "6d03b8878d68847db2de67541def5c0a55aa8e649b3ae8d271ff0bd8f7a8d8c3"
	I1201 20:47:14.340042  506033 cri.go:89] found id: "6f7f030728ba978e9d1c40bc15d4e97d038f707c17ae6256a91e181759d66be8"
	I1201 20:47:14.340044  506033 cri.go:89] found id: "210f9d677a8cce9e5a0bb8b3718e5149ae90a64c7195d468fdf6d239450e77d8"
	I1201 20:47:14.340046  506033 cri.go:89] found id: "d49dafd49655e5e846be1c9f6b0037af1130a39f13c78cae254965aa8c094fe5"
	I1201 20:47:14.340049  506033 cri.go:89] found id: "cc3cb078e7b953119709607ea2fc7e61a131ba11cbdeb8254ac8ab07993f7c27"
	I1201 20:47:14.340051  506033 cri.go:89] found id: ""
	I1201 20:47:14.340055  506033 cri.go:252] Stopping containers: [051828e0578b86b5aa7384bafb77820fd3cffe481165ad39e6fcf6e4d12e60af 9f38ec74d26bb06b9156db59706072c29aad6cd6ac01ed9cc592011aa8bd03fc 2202400f9cea4b1e7ff435b72ae7f8675046afa822580960524bb83f9b22ee44 145617cc4f3ba4b42601cfe155f26d9e56f15add5f534b626b0702a70b9cbcea bfa0ba402af68604d224a260b8e9f1221900fca7ad26063ba759d3077eee0c86 d76f2d4ff26e5155c66fbfefba0fccf9eafb624f1c6f0c1d94e366152c504604 61f3f7721fdb655f3e5bca108e2a930ebb4a23f52ce03164607e76f2f26350bc c82d182de6b06d2ac2fc2f7437a01ba1d89b7749ee6de95cc479fb192b9d51d4 43f459996d8ac3aa2fe1462884fd30e2be7a90127a83783cc6dcfdcebd0f9719 6d03b8878d68847db2de67541def5c0a55aa8e649b3ae8d271ff0bd8f7a8d8c3 6f7f030728ba978e9d1c40bc15d4e97d038f707c17ae6256a91e181759d66be8 210f9d677a8cce9e5a0bb8b3718e5149ae90a64c7195d468fdf6d239450e77d8 d49dafd49655e5e846be1c9f6b0037af1130a39f13c78cae254965aa8c094fe5 cc3cb078e7b953119709607ea2fc7e61a131ba11cbdeb8254ac8ab07993f7c27]
	I1201 20:47:14.340132  506033 ssh_runner.go:195] Run: which crictl
	I1201 20:47:14.350607  506033 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 051828e0578b86b5aa7384bafb77820fd3cffe481165ad39e6fcf6e4d12e60af 9f38ec74d26bb06b9156db59706072c29aad6cd6ac01ed9cc592011aa8bd03fc 2202400f9cea4b1e7ff435b72ae7f8675046afa822580960524bb83f9b22ee44 145617cc4f3ba4b42601cfe155f26d9e56f15add5f534b626b0702a70b9cbcea bfa0ba402af68604d224a260b8e9f1221900fca7ad26063ba759d3077eee0c86 d76f2d4ff26e5155c66fbfefba0fccf9eafb624f1c6f0c1d94e366152c504604 61f3f7721fdb655f3e5bca108e2a930ebb4a23f52ce03164607e76f2f26350bc c82d182de6b06d2ac2fc2f7437a01ba1d89b7749ee6de95cc479fb192b9d51d4 43f459996d8ac3aa2fe1462884fd30e2be7a90127a83783cc6dcfdcebd0f9719 6d03b8878d68847db2de67541def5c0a55aa8e649b3ae8d271ff0bd8f7a8d8c3 6f7f030728ba978e9d1c40bc15d4e97d038f707c17ae6256a91e181759d66be8 210f9d677a8cce9e5a0bb8b3718e5149ae90a64c7195d468fdf6d239450e77d8 d49dafd49655e5e846be1c9f6b0037af1130a39f13c78cae254965aa8c094fe5 cc3cb078e7b953119709607ea2fc7e61a131ba11cbdeb8254ac8ab07993f7c27
	I1201 20:47:20.403717  506033 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 051828e0578b86b5aa7384bafb77820fd3cffe481165ad39e6fcf6e4d12e60af 9f38ec74d26bb06b9156db59706072c29aad6cd6ac01ed9cc592011aa8bd03fc 2202400f9cea4b1e7ff435b72ae7f8675046afa822580960524bb83f9b22ee44 145617cc4f3ba4b42601cfe155f26d9e56f15add5f534b626b0702a70b9cbcea bfa0ba402af68604d224a260b8e9f1221900fca7ad26063ba759d3077eee0c86 d76f2d4ff26e5155c66fbfefba0fccf9eafb624f1c6f0c1d94e366152c504604 61f3f7721fdb655f3e5bca108e2a930ebb4a23f52ce03164607e76f2f26350bc c82d182de6b06d2ac2fc2f7437a01ba1d89b7749ee6de95cc479fb192b9d51d4 43f459996d8ac3aa2fe1462884fd30e2be7a90127a83783cc6dcfdcebd0f9719 6d03b8878d68847db2de67541def5c0a55aa8e649b3ae8d271ff0bd8f7a8d8c3 6f7f030728ba978e9d1c40bc15d4e97d038f707c17ae6256a91e181759d66be8 210f9d677a8cce9e5a0bb8b3718e5149ae90a64c7195d468fdf6d239450e77d8 d49dafd49655e5e846be1c9f6b0037af1130a39f13c78cae254965aa8c094fe5 cc3cb078e7b953119709607ea2fc7e61a131ba11cbdeb8254ac8ab07993f7c27:
(6.053070303s)
	I1201 20:47:20.403776  506033 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1201 20:47:20.518230  506033 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 20:47:20.526660  506033 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  1 20:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec  1 20:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Dec  1 20:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  1 20:45 /etc/kubernetes/scheduler.conf
	
	I1201 20:47:20.526735  506033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1201 20:47:20.534942  506033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1201 20:47:20.543017  506033 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:47:20.543078  506033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 20:47:20.550713  506033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1201 20:47:20.558555  506033 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:47:20.558615  506033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 20:47:20.567948  506033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1201 20:47:20.576642  506033 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:47:20.576696  506033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 20:47:20.585089  506033 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 20:47:20.593573  506033 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:47:20.642444  506033 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:47:23.418695  506033 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.776225798s)
	I1201 20:47:23.418756  506033 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:47:23.644364  506033 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:47:23.700800  506033 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:47:23.764000  506033 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:47:23.764069  506033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:47:24.264361  506033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:47:24.764192  506033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:47:24.784712  506033 api_server.go:72] duration metric: took 1.020710616s to wait for apiserver process to appear ...
	I1201 20:47:24.784728  506033 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:47:24.784746  506033 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1201 20:47:27.714873  506033 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1201 20:47:27.714890  506033 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1201 20:47:27.714910  506033 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1201 20:47:27.786800  506033 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1201 20:47:27.786817  506033 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1201 20:47:27.786829  506033 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1201 20:47:27.929640  506033 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1201 20:47:27.929657  506033 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1201 20:47:28.285348  506033 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1201 20:47:28.293647  506033 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:47:28.293667  506033 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:47:28.785346  506033 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1201 20:47:28.794532  506033 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:47:28.794553  506033 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:47:29.285118  506033 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1201 20:47:29.293749  506033 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1201 20:47:29.315513  506033 api_server.go:141] control plane version: v1.34.2
	I1201 20:47:29.315531  506033 api_server.go:131] duration metric: took 4.530798117s to wait for apiserver health ...
	I1201 20:47:29.315540  506033 cni.go:84] Creating CNI manager for ""
	I1201 20:47:29.315546  506033 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:47:29.320384  506033 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1201 20:47:29.323368  506033 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1201 20:47:29.327717  506033 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1201 20:47:29.327728  506033 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1201 20:47:29.345387  506033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1201 20:47:29.848226  506033 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:47:29.853447  506033 system_pods.go:59] 8 kube-system pods found
	I1201 20:47:29.853473  506033 system_pods.go:61] "coredns-66bc5c9577-557sq" [dd26afa3-8f38-456f-8855-c86850576534] Running
	I1201 20:47:29.853484  506033 system_pods.go:61] "etcd-functional-074555" [3c0bec6e-af98-4784-b6cb-680d7811b6b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:47:29.853490  506033 system_pods.go:61] "kindnet-pms5n" [08a65ac2-e547-42f7-9b28-5fa07ea94b8a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1201 20:47:29.853499  506033 system_pods.go:61] "kube-apiserver-functional-074555" [8bf9cf68-e1de-4ed8-8e33-05bef3e70862] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 20:47:29.853504  506033 system_pods.go:61] "kube-controller-manager-functional-074555" [db5354a5-5f16-41ac-829e-45c61fd35f86] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:47:29.853510  506033 system_pods.go:61] "kube-proxy-f7bq5" [700bd810-a999-4871-824f-32ae4e35eafb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1201 20:47:29.853515  506033 system_pods.go:61] "kube-scheduler-functional-074555" [7a51717b-2d72-4eb0-aff8-210071367e8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:47:29.853522  506033 system_pods.go:61] "storage-provisioner" [4106e937-5606-438b-bc9e-b9c8b32f2832] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:47:29.853527  506033 system_pods.go:74] duration metric: took 5.29017ms to wait for pod list to return data ...
	I1201 20:47:29.853534  506033 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:47:29.857264  506033 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1201 20:47:29.857285  506033 node_conditions.go:123] node cpu capacity is 2
	I1201 20:47:29.857297  506033 node_conditions.go:105] duration metric: took 3.759576ms to run NodePressure ...
	I1201 20:47:29.857358  506033 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:47:30.201959  506033 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1201 20:47:30.205327  506033 kubeadm.go:744] kubelet initialised
	I1201 20:47:30.205339  506033 kubeadm.go:745] duration metric: took 3.366469ms waiting for restarted kubelet to initialise ...
	I1201 20:47:30.205354  506033 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1201 20:47:30.215092  506033 ops.go:34] apiserver oom_adj: -16
	I1201 20:47:30.215112  506033 kubeadm.go:602] duration metric: took 15.975268179s to restartPrimaryControlPlane
	I1201 20:47:30.215122  506033 kubeadm.go:403] duration metric: took 16.067519905s to StartCluster
	I1201 20:47:30.215163  506033 settings.go:142] acquiring lock: {Name:mk783c1fd28fb527bb837882511f132133dc86fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:47:30.215241  506033 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 20:47:30.215876  506033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/kubeconfig: {Name:mk92cfd0553ba70a7f11610c1bc1b8b04b905ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:47:30.216127  506033 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:47:30.216393  506033 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:47:30.216432  506033 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:47:30.216492  506033 addons.go:70] Setting storage-provisioner=true in profile "functional-074555"
	I1201 20:47:30.216515  506033 addons.go:239] Setting addon storage-provisioner=true in "functional-074555"
	W1201 20:47:30.216520  506033 addons.go:248] addon storage-provisioner should already be in state true
	I1201 20:47:30.216544  506033 host.go:66] Checking if "functional-074555" exists ...
	I1201 20:47:30.216972  506033 cli_runner.go:164] Run: docker container inspect functional-074555 --format={{.State.Status}}
	I1201 20:47:30.217476  506033 addons.go:70] Setting default-storageclass=true in profile "functional-074555"
	I1201 20:47:30.217495  506033 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-074555"
	I1201 20:47:30.217824  506033 cli_runner.go:164] Run: docker container inspect functional-074555 --format={{.State.Status}}
	I1201 20:47:30.221999  506033 out.go:179] * Verifying Kubernetes components...
	I1201 20:47:30.226923  506033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:47:30.257158  506033 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:47:30.258810  506033 addons.go:239] Setting addon default-storageclass=true in "functional-074555"
	W1201 20:47:30.258853  506033 addons.go:248] addon default-storageclass should already be in state true
	I1201 20:47:30.258880  506033 host.go:66] Checking if "functional-074555" exists ...
	I1201 20:47:30.259842  506033 cli_runner.go:164] Run: docker container inspect functional-074555 --format={{.State.Status}}
	I1201 20:47:30.260289  506033 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:47:30.260305  506033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:47:30.260410  506033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074555
	I1201 20:47:30.282079  506033 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 20:47:30.282094  506033 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 20:47:30.282163  506033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074555
	I1201 20:47:30.289381  506033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-074555/id_rsa Username:docker}
	I1201 20:47:30.319561  506033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-074555/id_rsa Username:docker}
	I1201 20:47:30.453535  506033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:47:30.474163  506033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 20:47:30.478004  506033 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:47:31.371892  506033 node_ready.go:35] waiting up to 6m0s for node "functional-074555" to be "Ready" ...
	I1201 20:47:31.380593  506033 node_ready.go:49] node "functional-074555" is "Ready"
	I1201 20:47:31.380612  506033 node_ready.go:38] duration metric: took 8.688728ms for node "functional-074555" to be "Ready" ...
	I1201 20:47:31.380626  506033 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:47:31.380694  506033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:47:31.386896  506033 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1201 20:47:31.389744  506033 addons.go:530] duration metric: took 1.173300902s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1201 20:47:31.394957  506033 api_server.go:72] duration metric: took 1.178803851s to wait for apiserver process to appear ...
	I1201 20:47:31.394971  506033 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:47:31.394988  506033 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1201 20:47:31.408731  506033 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1201 20:47:31.410119  506033 api_server.go:141] control plane version: v1.34.2
	I1201 20:47:31.410135  506033 api_server.go:131] duration metric: took 15.158722ms to wait for apiserver health ...
	I1201 20:47:31.410144  506033 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:47:31.417218  506033 system_pods.go:59] 8 kube-system pods found
	I1201 20:47:31.417236  506033 system_pods.go:61] "coredns-66bc5c9577-557sq" [dd26afa3-8f38-456f-8855-c86850576534] Running
	I1201 20:47:31.417244  506033 system_pods.go:61] "etcd-functional-074555" [3c0bec6e-af98-4784-b6cb-680d7811b6b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:47:31.417249  506033 system_pods.go:61] "kindnet-pms5n" [08a65ac2-e547-42f7-9b28-5fa07ea94b8a] Running
	I1201 20:47:31.417256  506033 system_pods.go:61] "kube-apiserver-functional-074555" [8bf9cf68-e1de-4ed8-8e33-05bef3e70862] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 20:47:31.417262  506033 system_pods.go:61] "kube-controller-manager-functional-074555" [db5354a5-5f16-41ac-829e-45c61fd35f86] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:47:31.417266  506033 system_pods.go:61] "kube-proxy-f7bq5" [700bd810-a999-4871-824f-32ae4e35eafb] Running
	I1201 20:47:31.417271  506033 system_pods.go:61] "kube-scheduler-functional-074555" [7a51717b-2d72-4eb0-aff8-210071367e8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:47:31.417277  506033 system_pods.go:61] "storage-provisioner" [4106e937-5606-438b-bc9e-b9c8b32f2832] Running
	I1201 20:47:31.417282  506033 system_pods.go:74] duration metric: took 7.132609ms to wait for pod list to return data ...
	I1201 20:47:31.417289  506033 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:47:31.420370  506033 default_sa.go:45] found service account: "default"
	I1201 20:47:31.420386  506033 default_sa.go:55] duration metric: took 3.091849ms for default service account to be created ...
	I1201 20:47:31.420394  506033 system_pods.go:116] waiting for k8s-apps to be running ...
	I1201 20:47:31.423659  506033 system_pods.go:86] 8 kube-system pods found
	I1201 20:47:31.423676  506033 system_pods.go:89] "coredns-66bc5c9577-557sq" [dd26afa3-8f38-456f-8855-c86850576534] Running
	I1201 20:47:31.423685  506033 system_pods.go:89] "etcd-functional-074555" [3c0bec6e-af98-4784-b6cb-680d7811b6b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:47:31.423689  506033 system_pods.go:89] "kindnet-pms5n" [08a65ac2-e547-42f7-9b28-5fa07ea94b8a] Running
	I1201 20:47:31.423696  506033 system_pods.go:89] "kube-apiserver-functional-074555" [8bf9cf68-e1de-4ed8-8e33-05bef3e70862] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 20:47:31.423701  506033 system_pods.go:89] "kube-controller-manager-functional-074555" [db5354a5-5f16-41ac-829e-45c61fd35f86] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:47:31.423705  506033 system_pods.go:89] "kube-proxy-f7bq5" [700bd810-a999-4871-824f-32ae4e35eafb] Running
	I1201 20:47:31.423710  506033 system_pods.go:89] "kube-scheduler-functional-074555" [7a51717b-2d72-4eb0-aff8-210071367e8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:47:31.423713  506033 system_pods.go:89] "storage-provisioner" [4106e937-5606-438b-bc9e-b9c8b32f2832] Running
	I1201 20:47:31.423718  506033 system_pods.go:126] duration metric: took 3.319561ms to wait for k8s-apps to be running ...
	I1201 20:47:31.423724  506033 system_svc.go:44] waiting for kubelet service to be running ....
	I1201 20:47:31.423783  506033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:47:31.439932  506033 system_svc.go:56] duration metric: took 16.197895ms WaitForService to wait for kubelet
	I1201 20:47:31.439952  506033 kubeadm.go:587] duration metric: took 1.223801494s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:47:31.439970  506033 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:47:31.443375  506033 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1201 20:47:31.443402  506033 node_conditions.go:123] node cpu capacity is 2
	I1201 20:47:31.443412  506033 node_conditions.go:105] duration metric: took 3.43795ms to run NodePressure ...
	I1201 20:47:31.443424  506033 start.go:242] waiting for startup goroutines ...
	I1201 20:47:31.443430  506033 start.go:247] waiting for cluster config update ...
	I1201 20:47:31.443441  506033 start.go:256] writing updated cluster config ...
	I1201 20:47:31.443793  506033 ssh_runner.go:195] Run: rm -f paused
	I1201 20:47:31.454400  506033 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:47:31.458202  506033 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-557sq" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:47:31.465855  506033 pod_ready.go:94] pod "coredns-66bc5c9577-557sq" is "Ready"
	I1201 20:47:31.465880  506033 pod_ready.go:86] duration metric: took 7.654034ms for pod "coredns-66bc5c9577-557sq" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:47:31.518542  506033 pod_ready.go:83] waiting for pod "etcd-functional-074555" in "kube-system" namespace to be "Ready" or be gone ...
	W1201 20:47:33.524676  506033 pod_ready.go:104] pod "etcd-functional-074555" is not "Ready", error: <nil>
	W1201 20:47:36.025822  506033 pod_ready.go:104] pod "etcd-functional-074555" is not "Ready", error: <nil>
	W1201 20:47:38.524904  506033 pod_ready.go:104] pod "etcd-functional-074555" is not "Ready", error: <nil>
	W1201 20:47:41.024916  506033 pod_ready.go:104] pod "etcd-functional-074555" is not "Ready", error: <nil>
	I1201 20:47:42.526085  506033 pod_ready.go:94] pod "etcd-functional-074555" is "Ready"
	I1201 20:47:42.526100  506033 pod_ready.go:86] duration metric: took 11.007543681s for pod "etcd-functional-074555" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:47:42.528950  506033 pod_ready.go:83] waiting for pod "kube-apiserver-functional-074555" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:47:42.539066  506033 pod_ready.go:94] pod "kube-apiserver-functional-074555" is "Ready"
	I1201 20:47:42.539081  506033 pod_ready.go:86] duration metric: took 10.117703ms for pod "kube-apiserver-functional-074555" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:47:42.542321  506033 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-074555" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:47:42.548043  506033 pod_ready.go:94] pod "kube-controller-manager-functional-074555" is "Ready"
	I1201 20:47:42.548059  506033 pod_ready.go:86] duration metric: took 5.722694ms for pod "kube-controller-manager-functional-074555" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:47:42.550493  506033 pod_ready.go:83] waiting for pod "kube-proxy-f7bq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:47:42.723200  506033 pod_ready.go:94] pod "kube-proxy-f7bq5" is "Ready"
	I1201 20:47:42.723214  506033 pod_ready.go:86] duration metric: took 172.708779ms for pod "kube-proxy-f7bq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:47:42.922459  506033 pod_ready.go:83] waiting for pod "kube-scheduler-functional-074555" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:47:43.322840  506033 pod_ready.go:94] pod "kube-scheduler-functional-074555" is "Ready"
	I1201 20:47:43.322853  506033 pod_ready.go:86] duration metric: took 400.381726ms for pod "kube-scheduler-functional-074555" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:47:43.322863  506033 pod_ready.go:40] duration metric: took 11.868439793s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:47:43.379540  506033 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1201 20:47:43.382517  506033 out.go:179] * Done! kubectl is now configured to use "functional-074555" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 01 20:48:19 functional-074555 crio[3549]: time="2025-12-01T20:48:19.82936509Z" level=info msg="Checking pod default_hello-node-75c85bcc94-hhgj2 for CNI network kindnet (type=ptp)"
	Dec 01 20:48:19 functional-074555 crio[3549]: time="2025-12-01T20:48:19.832294447Z" level=info msg="Ran pod sandbox 2dfef6d88d800f0caa509167394fe179997d0c4b4e487626b5869be024cd02f8 with infra container: default/hello-node-75c85bcc94-hhgj2/POD" id=47c30b84-12c0-4df1-8164-2598390b37f3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:48:19 functional-074555 crio[3549]: time="2025-12-01T20:48:19.834986887Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6ccb956b-7f74-4921-869f-9d2ef4299328 name=/runtime.v1.ImageService/PullImage
	Dec 01 20:48:20 functional-074555 crio[3549]: time="2025-12-01T20:48:20.795964051Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4d0710d9-b859-47b2-b186-480e37403e18 name=/runtime.v1.ImageService/PullImage
	Dec 01 20:48:23 functional-074555 crio[3549]: time="2025-12-01T20:48:23.833985884Z" level=info msg="Stopping pod sandbox: 758ebbd398bbaee6aa56a4c0e48cad94f37d1c978788e3fe293aecc3fbe09c58" id=b741a800-8d3b-44ee-b457-0dce8b7e927f name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 01 20:48:23 functional-074555 crio[3549]: time="2025-12-01T20:48:23.834049079Z" level=info msg="Stopped pod sandbox (already stopped): 758ebbd398bbaee6aa56a4c0e48cad94f37d1c978788e3fe293aecc3fbe09c58" id=b741a800-8d3b-44ee-b457-0dce8b7e927f name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 01 20:48:23 functional-074555 crio[3549]: time="2025-12-01T20:48:23.834788887Z" level=info msg="Removing pod sandbox: 758ebbd398bbaee6aa56a4c0e48cad94f37d1c978788e3fe293aecc3fbe09c58" id=97c37ffb-5475-4fa5-a9e7-fea18d10ffe7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 01 20:48:23 functional-074555 crio[3549]: time="2025-12-01T20:48:23.838712849Z" level=info msg="Removed pod sandbox: 758ebbd398bbaee6aa56a4c0e48cad94f37d1c978788e3fe293aecc3fbe09c58" id=97c37ffb-5475-4fa5-a9e7-fea18d10ffe7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 01 20:48:23 functional-074555 crio[3549]: time="2025-12-01T20:48:23.839379165Z" level=info msg="Stopping pod sandbox: e4e57fc1000d0bc7a58481ddf44d119d2c18ed749cdea2a299372c23bca2b8b7" id=6e1c7ed1-7f64-4b9f-9680-65819d6dfb2e name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 01 20:48:23 functional-074555 crio[3549]: time="2025-12-01T20:48:23.839431398Z" level=info msg="Stopped pod sandbox (already stopped): e4e57fc1000d0bc7a58481ddf44d119d2c18ed749cdea2a299372c23bca2b8b7" id=6e1c7ed1-7f64-4b9f-9680-65819d6dfb2e name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 01 20:48:23 functional-074555 crio[3549]: time="2025-12-01T20:48:23.839788937Z" level=info msg="Removing pod sandbox: e4e57fc1000d0bc7a58481ddf44d119d2c18ed749cdea2a299372c23bca2b8b7" id=e9737e7e-d508-4305-8efb-ab1917bde160 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 01 20:48:23 functional-074555 crio[3549]: time="2025-12-01T20:48:23.843766035Z" level=info msg="Removed pod sandbox: e4e57fc1000d0bc7a58481ddf44d119d2c18ed749cdea2a299372c23bca2b8b7" id=e9737e7e-d508-4305-8efb-ab1917bde160 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 01 20:48:23 functional-074555 crio[3549]: time="2025-12-01T20:48:23.844273175Z" level=info msg="Stopping pod sandbox: e26dcbf27d6dd838abdc2b73e5328f74a88881d7576e2b63d8606cfe8b072b69" id=78941531-365a-4eda-b5ab-b97e316ffc39 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 01 20:48:23 functional-074555 crio[3549]: time="2025-12-01T20:48:23.844326975Z" level=info msg="Stopped pod sandbox (already stopped): e26dcbf27d6dd838abdc2b73e5328f74a88881d7576e2b63d8606cfe8b072b69" id=78941531-365a-4eda-b5ab-b97e316ffc39 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 01 20:48:23 functional-074555 crio[3549]: time="2025-12-01T20:48:23.844806177Z" level=info msg="Removing pod sandbox: e26dcbf27d6dd838abdc2b73e5328f74a88881d7576e2b63d8606cfe8b072b69" id=d41a0bc0-5a95-414a-bab5-4c84cd604a1a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 01 20:48:23 functional-074555 crio[3549]: time="2025-12-01T20:48:23.848934123Z" level=info msg="Removed pod sandbox: e26dcbf27d6dd838abdc2b73e5328f74a88881d7576e2b63d8606cfe8b072b69" id=d41a0bc0-5a95-414a-bab5-4c84cd604a1a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 01 20:48:30 functional-074555 crio[3549]: time="2025-12-01T20:48:30.796040067Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c6f49f78-cdba-4c6f-94a1-125e485e4c81 name=/runtime.v1.ImageService/PullImage
	Dec 01 20:48:44 functional-074555 crio[3549]: time="2025-12-01T20:48:44.795865245Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2543dd1b-63b0-46fb-8f01-e27aa6c0de80 name=/runtime.v1.ImageService/PullImage
	Dec 01 20:48:54 functional-074555 crio[3549]: time="2025-12-01T20:48:54.796510354Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=594574ac-c668-4113-a152-f244a178e2d8 name=/runtime.v1.ImageService/PullImage
	Dec 01 20:49:31 functional-074555 crio[3549]: time="2025-12-01T20:49:31.796423516Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ce0c2954-99d3-443b-8379-23cdc26b773e name=/runtime.v1.ImageService/PullImage
	Dec 01 20:49:36 functional-074555 crio[3549]: time="2025-12-01T20:49:36.796385993Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6ee7e73e-06c7-496d-a8aa-4839edda001c name=/runtime.v1.ImageService/PullImage
	Dec 01 20:50:58 functional-074555 crio[3549]: time="2025-12-01T20:50:58.795733655Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=726f0345-00d0-4f1e-be5e-136790213729 name=/runtime.v1.ImageService/PullImage
	Dec 01 20:51:02 functional-074555 crio[3549]: time="2025-12-01T20:51:02.795818741Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=65b1ff2a-de7b-446e-b440-7eafb07702ea name=/runtime.v1.ImageService/PullImage
	Dec 01 20:53:48 functional-074555 crio[3549]: time="2025-12-01T20:53:48.796027699Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=162d2a82-c993-4e2e-a759-a96c89ff7a42 name=/runtime.v1.ImageService/PullImage
	Dec 01 20:53:53 functional-074555 crio[3549]: time="2025-12-01T20:53:53.79691866Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=775e9c53-d16b-47c1-9fbb-5a113c90bebf name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b4ff920cc68a9       docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712   9 minutes ago       Running             myfrontend                0                   be7b220572c39       sp-pod                                      default
	a39df965f586b       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   23d3f8edd4e6e       nginx-svc                                   default
	d7776f7e2bedc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               3                   d29f250c81834       kindnet-pms5n                               kube-system
	cd74f9dd2d884       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                  10 minutes ago      Running             kube-proxy                3                   f553d0a6deae1       kube-proxy-f7bq5                            kube-system
	e35238e3fa949       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       3                   4cf8564364818       storage-provisioner                         kube-system
	d92d2c9c0e1d8       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                  10 minutes ago      Running             kube-apiserver            0                   beb165471bef2       kube-apiserver-functional-074555            kube-system
	ce215159ff7e2       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                  10 minutes ago      Running             kube-scheduler            3                   f3c780c78dd47       kube-scheduler-functional-074555            kube-system
	230a92bc129d2       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                  10 minutes ago      Running             kube-controller-manager   3                   b9de027f343d9       kube-controller-manager-functional-074555   kube-system
	d8416e552deaf       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                  10 minutes ago      Running             etcd                      3                   aca6da77909f4       etcd-functional-074555                      kube-system
	e67a716cfc77b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   d176fdf5bf32c       coredns-66bc5c9577-557sq                    kube-system
	051828e0578b8       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                  10 minutes ago      Exited              kube-proxy                2                   f553d0a6deae1       kube-proxy-f7bq5                            kube-system
	9f38ec74d26bb       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                  10 minutes ago      Exited              kube-scheduler            2                   f3c780c78dd47       kube-scheduler-functional-074555            kube-system
	2202400f9cea4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Exited              kindnet-cni               2                   d29f250c81834       kindnet-pms5n                               kube-system
	145617cc4f3ba       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                  10 minutes ago      Exited              kube-controller-manager   2                   b9de027f343d9       kube-controller-manager-functional-074555   kube-system
	d76f2d4ff26e5       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                  10 minutes ago      Exited              etcd                      2                   aca6da77909f4       etcd-functional-074555                      kube-system
	61f3f7721fdb6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       2                   4cf8564364818       storage-provisioner                         kube-system
	210f9d677a8cc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   d176fdf5bf32c       coredns-66bc5c9577-557sq                    kube-system
	
	
	==> coredns [210f9d677a8cce9e5a0bb8b3718e5149ae90a64c7195d468fdf6d239450e77d8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53492 - 20653 "HINFO IN 171787812661265144.5219480700688288995. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022702672s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e67a716cfc77b493b438459245f7f1fde200cc4ccb33b73e309c67c0446e4202] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40208 - 62993 "HINFO IN 1089385193221026120.5679784854954244086. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014169302s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               functional-074555
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-074555
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=functional-074555
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_45_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:45:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-074555
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:58:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:53:14 +0000   Mon, 01 Dec 2025 20:45:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:53:14 +0000   Mon, 01 Dec 2025 20:45:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:53:14 +0000   Mon, 01 Dec 2025 20:45:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 20:53:14 +0000   Mon, 01 Dec 2025 20:46:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-074555
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                8585eac2-ae4c-43c9-a349-b100877ef1b4
	  Boot ID:                    06dea43b-2aa1-4726-8bb8-0a198189349a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-hhgj2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  default                     hello-node-connect-7d85dfc575-vrrp6          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-66bc5c9577-557sq                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-074555                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-pms5n                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-074555             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-074555    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-f7bq5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-074555             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-074555 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-074555 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-074555 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-074555 event: Registered Node functional-074555 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-074555 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-074555 event: Registered Node functional-074555 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-074555 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-074555 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-074555 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-074555 event: Registered Node functional-074555 in Controller
	
	
	==> dmesg <==
	[Dec 1 19:31] hrtimer: interrupt took 3224715 ns
	[Dec 1 20:00] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:16] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 1 20:22] systemd-journald[231]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 1 20:37] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:38] overlayfs: idmapped layers are currently not supported
	[  +0.076902] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 1 20:44] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:45] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d76f2d4ff26e5155c66fbfefba0fccf9eafb624f1c6f0c1d94e366152c504604] <==
	{"level":"warn","ts":"2025-12-01T20:47:17.766303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:17.797008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:17.822100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:17.858224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:17.872453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:17.898442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:18.007477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38518","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-01T20:47:20.266775Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-01T20:47:20.266838Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-074555","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-01T20:47:20.266971Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-01T20:47:20.268797Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-01T20:47:20.268886Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-01T20:47:20.268909Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-01T20:47:20.268975Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-01T20:47:20.268987Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-01T20:47:20.268979Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-01T20:47:20.269015Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-01T20:47:20.269039Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-01T20:47:20.269079Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-01T20:47:20.269088Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-01T20:47:20.269095Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-01T20:47:20.273085Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-01T20:47:20.273175Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-01T20:47:20.273207Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-01T20:47:20.273228Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-074555","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [d8416e552deafdc0fc11da73fc1658d7e72a21d7f27fbf0c787fc9b55568c02a] <==
	{"level":"warn","ts":"2025-12-01T20:47:26.544256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.555358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.572724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.590307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.607370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.625352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.642254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.678771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.694440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.712002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.735312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.748253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.772962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.795120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.824311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.838695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.856805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.881784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.919415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.949712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:26.959348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:47:27.057964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60432","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-01T20:57:25.542880Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1101}
	{"level":"info","ts":"2025-12-01T20:57:25.566014Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1101,"took":"22.744848ms","hash":2565349743,"current-db-size-bytes":3166208,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1318912,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-12-01T20:57:25.566087Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2565349743,"revision":1101,"compact-revision":-1}
	
	
	==> kernel <==
	 20:58:05 up  2:40,  0 user,  load average: 0.13, 0.37, 1.10
	Linux functional-074555 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2202400f9cea4b1e7ff435b72ae7f8675046afa822580960524bb83f9b22ee44] <==
	I1201 20:47:14.110637       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 20:47:14.110998       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1201 20:47:14.116407       1 main.go:148] setting mtu 1500 for CNI 
	I1201 20:47:14.116516       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 20:47:14.116656       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T20:47:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 20:47:14.341189       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 20:47:14.341274       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 20:47:14.341313       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 20:47:14.347807       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1201 20:47:18.715151       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1201 20:47:18.715306       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1201 20:47:18.715323       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1201 20:47:18.715430       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	
	
	==> kindnet [d7776f7e2bedcada56d6694fe1b0c4b8ab52dd34366d55bc13dad5c4e82fc703] <==
	I1201 20:55:59.504448       1 main.go:301] handling current node
	I1201 20:56:09.503913       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:56:09.503947       1 main.go:301] handling current node
	I1201 20:56:19.504557       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:56:19.504592       1 main.go:301] handling current node
	I1201 20:56:29.511295       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:56:29.511403       1 main.go:301] handling current node
	I1201 20:56:39.510506       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:56:39.510545       1 main.go:301] handling current node
	I1201 20:56:49.509534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:56:49.509569       1 main.go:301] handling current node
	I1201 20:56:59.511227       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:56:59.511261       1 main.go:301] handling current node
	I1201 20:57:09.510174       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:57:09.510290       1 main.go:301] handling current node
	I1201 20:57:19.503661       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:57:19.503695       1 main.go:301] handling current node
	I1201 20:57:29.512348       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:57:29.512454       1 main.go:301] handling current node
	I1201 20:57:39.512185       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:57:39.512219       1 main.go:301] handling current node
	I1201 20:57:49.504598       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:57:49.504644       1 main.go:301] handling current node
	I1201 20:57:59.511244       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 20:57:59.511280       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d92d2c9c0e1d8e1ec975d9e5ffd38f146489e14b24721068a580a5a90fdc5d89] <==
	I1201 20:47:27.997488       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1201 20:47:27.998038       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1201 20:47:28.000401       1 aggregator.go:171] initial CRD sync complete...
	I1201 20:47:28.003229       1 autoregister_controller.go:144] Starting autoregister controller
	I1201 20:47:28.003335       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1201 20:47:28.003372       1 cache.go:39] Caches are synced for autoregister controller
	I1201 20:47:28.001656       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1201 20:47:28.012269       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:47:28.036765       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 20:47:28.694886       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1201 20:47:28.840312       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 20:47:29.840480       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1201 20:47:30.067607       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1201 20:47:30.180686       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 20:47:30.189935       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 20:47:46.627677       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 20:47:47.028644       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.240.66"}
	I1201 20:47:47.054824       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 20:47:53.340466       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.187.170"}
	I1201 20:48:02.882185       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 20:48:03.038733       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.124.213"}
	E1201 20:48:10.281758       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56792: use of closed network connection
	E1201 20:48:19.380993       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45714: use of closed network connection
	I1201 20:48:19.596136       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.11.182"}
	I1201 20:57:27.942362       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [145617cc4f3ba4b42601cfe155f26d9e56f15add5f534b626b0702a70b9cbcea] <==
	I1201 20:47:15.769829       1 serving.go:386] Generated self-signed cert in-memory
	I1201 20:47:17.480762       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1201 20:47:17.480797       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:47:17.482383       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1201 20:47:17.482482       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1201 20:47:17.483124       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1201 20:47:17.483160       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [230a92bc129d29bc856feae7a5f530a370a44ce588069c4c1b9ce179724e495d] <==
	I1201 20:47:31.395312       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1201 20:47:31.395369       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1201 20:47:31.395478       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1201 20:47:31.398137       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1201 20:47:31.398204       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 20:47:31.398211       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1201 20:47:31.398217       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1201 20:47:31.401710       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1201 20:47:31.402017       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1201 20:47:31.403445       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1201 20:47:31.406070       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1201 20:47:31.406163       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1201 20:47:31.408979       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 20:47:31.410573       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1201 20:47:31.411008       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 20:47:31.411292       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1201 20:47:31.414658       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1201 20:47:31.418941       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1201 20:47:31.426282       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1201 20:47:31.426364       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1201 20:47:31.427498       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1201 20:47:31.427544       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1201 20:47:31.436864       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1201 20:47:31.445876       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1201 20:47:31.446039       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	
	
	==> kube-proxy [051828e0578b86b5aa7384bafb77820fd3cffe481165ad39e6fcf6e4d12e60af] <==
	
	
	==> kube-proxy [cd74f9dd2d884abc75d00f345e7e6966879ae6ef1ed4fd6a87aa5495172d8590] <==
	I1201 20:47:29.243518       1 server_linux.go:53] "Using iptables proxy"
	I1201 20:47:29.334684       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 20:47:29.438334       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 20:47:29.438371       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1201 20:47:29.438490       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 20:47:29.469004       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:47:29.469061       1 server_linux.go:132] "Using iptables Proxier"
	I1201 20:47:29.478686       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 20:47:29.479026       1 server.go:527] "Version info" version="v1.34.2"
	I1201 20:47:29.479051       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:47:29.480739       1 config.go:200] "Starting service config controller"
	I1201 20:47:29.480764       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 20:47:29.480783       1 config.go:106] "Starting endpoint slice config controller"
	I1201 20:47:29.480787       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 20:47:29.480804       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 20:47:29.480808       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 20:47:29.481442       1 config.go:309] "Starting node config controller"
	I1201 20:47:29.481465       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 20:47:29.481474       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 20:47:29.580946       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 20:47:29.580983       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 20:47:29.581030       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9f38ec74d26bb06b9156db59706072c29aad6cd6ac01ed9cc592011aa8bd03fc] <==
	I1201 20:47:16.149571       1 serving.go:386] Generated self-signed cert in-memory
	W1201 20:47:18.647694       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1201 20:47:18.647732       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1201 20:47:18.647743       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1201 20:47:18.647761       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1201 20:47:18.731587       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1201 20:47:18.731693       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1201 20:47:18.731744       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1201 20:47:18.733970       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:47:18.734056       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:47:18.734979       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1201 20:47:18.735050       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1201 20:47:18.737748       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	E1201 20:47:18.737884       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:47:18.738877       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:47:18.739035       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1201 20:47:18.739058       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1201 20:47:18.739100       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1201 20:47:18.739356       1 reflector.go:205] "Failed to watch" err="Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=528&timeout=7m30s&timeoutSeconds=450&watch=true\": context canceled" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1201 20:47:18.739488       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1201 20:47:18.743439       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ce215159ff7e2d71ed233525e2c56374668fda7a442384cf005e3bb52e955831] <==
	I1201 20:47:26.473417       1 serving.go:386] Generated self-signed cert in-memory
	W1201 20:47:27.943579       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1201 20:47:27.943683       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1201 20:47:27.943716       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1201 20:47:27.943757       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1201 20:47:27.975350       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1201 20:47:27.975448       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:47:27.977606       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:47:27.977679       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:47:27.978558       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1201 20:47:27.978629       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1201 20:47:28.078183       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 01 20:55:17 functional-074555 kubelet[4030]: E1201 20:55:17.795383    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hhgj2" podUID="bcb62dbe-5810-473f-8c3b-517f42ebf44d"
	Dec 01 20:55:26 functional-074555 kubelet[4030]: E1201 20:55:26.795288    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vrrp6" podUID="beb37a5d-3f22-4d67-a9fb-b71a743dfc6f"
	Dec 01 20:55:29 functional-074555 kubelet[4030]: E1201 20:55:29.795565    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hhgj2" podUID="bcb62dbe-5810-473f-8c3b-517f42ebf44d"
	Dec 01 20:55:38 functional-074555 kubelet[4030]: E1201 20:55:38.796023    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vrrp6" podUID="beb37a5d-3f22-4d67-a9fb-b71a743dfc6f"
	Dec 01 20:55:42 functional-074555 kubelet[4030]: E1201 20:55:42.795907    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hhgj2" podUID="bcb62dbe-5810-473f-8c3b-517f42ebf44d"
	Dec 01 20:55:53 functional-074555 kubelet[4030]: E1201 20:55:53.796549    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vrrp6" podUID="beb37a5d-3f22-4d67-a9fb-b71a743dfc6f"
	Dec 01 20:55:54 functional-074555 kubelet[4030]: E1201 20:55:54.795333    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hhgj2" podUID="bcb62dbe-5810-473f-8c3b-517f42ebf44d"
	Dec 01 20:56:07 functional-074555 kubelet[4030]: E1201 20:56:07.795521    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hhgj2" podUID="bcb62dbe-5810-473f-8c3b-517f42ebf44d"
	Dec 01 20:56:08 functional-074555 kubelet[4030]: E1201 20:56:08.795421    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vrrp6" podUID="beb37a5d-3f22-4d67-a9fb-b71a743dfc6f"
	Dec 01 20:56:20 functional-074555 kubelet[4030]: E1201 20:56:20.795764    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hhgj2" podUID="bcb62dbe-5810-473f-8c3b-517f42ebf44d"
	Dec 01 20:56:21 functional-074555 kubelet[4030]: E1201 20:56:21.795321    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vrrp6" podUID="beb37a5d-3f22-4d67-a9fb-b71a743dfc6f"
	Dec 01 20:56:33 functional-074555 kubelet[4030]: E1201 20:56:33.796327    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vrrp6" podUID="beb37a5d-3f22-4d67-a9fb-b71a743dfc6f"
	Dec 01 20:56:34 functional-074555 kubelet[4030]: E1201 20:56:34.795125    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hhgj2" podUID="bcb62dbe-5810-473f-8c3b-517f42ebf44d"
	Dec 01 20:56:45 functional-074555 kubelet[4030]: E1201 20:56:45.795809    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hhgj2" podUID="bcb62dbe-5810-473f-8c3b-517f42ebf44d"
	Dec 01 20:56:47 functional-074555 kubelet[4030]: E1201 20:56:47.797178    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vrrp6" podUID="beb37a5d-3f22-4d67-a9fb-b71a743dfc6f"
	Dec 01 20:56:58 functional-074555 kubelet[4030]: E1201 20:56:58.795046    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hhgj2" podUID="bcb62dbe-5810-473f-8c3b-517f42ebf44d"
	Dec 01 20:57:01 functional-074555 kubelet[4030]: E1201 20:57:01.795436    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vrrp6" podUID="beb37a5d-3f22-4d67-a9fb-b71a743dfc6f"
	Dec 01 20:57:13 functional-074555 kubelet[4030]: E1201 20:57:13.795827    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hhgj2" podUID="bcb62dbe-5810-473f-8c3b-517f42ebf44d"
	Dec 01 20:57:16 functional-074555 kubelet[4030]: E1201 20:57:16.795825    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vrrp6" podUID="beb37a5d-3f22-4d67-a9fb-b71a743dfc6f"
	Dec 01 20:57:26 functional-074555 kubelet[4030]: E1201 20:57:26.795542    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hhgj2" podUID="bcb62dbe-5810-473f-8c3b-517f42ebf44d"
	Dec 01 20:57:31 functional-074555 kubelet[4030]: E1201 20:57:31.795307    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vrrp6" podUID="beb37a5d-3f22-4d67-a9fb-b71a743dfc6f"
	Dec 01 20:57:41 functional-074555 kubelet[4030]: E1201 20:57:41.795963    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hhgj2" podUID="bcb62dbe-5810-473f-8c3b-517f42ebf44d"
	Dec 01 20:57:45 functional-074555 kubelet[4030]: E1201 20:57:45.795707    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vrrp6" podUID="beb37a5d-3f22-4d67-a9fb-b71a743dfc6f"
	Dec 01 20:57:55 functional-074555 kubelet[4030]: E1201 20:57:55.796081    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hhgj2" podUID="bcb62dbe-5810-473f-8c3b-517f42ebf44d"
	Dec 01 20:57:59 functional-074555 kubelet[4030]: E1201 20:57:59.796256    4030 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vrrp6" podUID="beb37a5d-3f22-4d67-a9fb-b71a743dfc6f"
	
	
	==> storage-provisioner [61f3f7721fdb655f3e5bca108e2a930ebb4a23f52ce03164607e76f2f26350bc] <==
	I1201 20:46:45.341630       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1201 20:46:45.356364       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1201 20:46:45.357055       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1201 20:46:45.359930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:46:48.815743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:46:53.076431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:46:56.676168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e35238e3fa949c5e83e4200e3084341f0acc38f89a00fa17655ed0fdb83ebdd1] <==
	W1201 20:57:41.512662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:57:43.516805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:57:43.523924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:57:45.527273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:57:45.532239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:57:47.534816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:57:47.539053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:57:49.542132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:57:49.548928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:57:51.551639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:57:51.556219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:57:53.559883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:57:53.566333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:57:55.568994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:57:55.573348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:57:57.575972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:57:57.580535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:57:59.583096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:57:59.587520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:58:01.590636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:58:01.597299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:58:03.603993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:58:03.613648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:58:05.622212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:58:05.628876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074555 -n functional-074555
helpers_test.go:269: (dbg) Run:  kubectl --context functional-074555 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-hhgj2 hello-node-connect-7d85dfc575-vrrp6
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-074555 describe pod hello-node-75c85bcc94-hhgj2 hello-node-connect-7d85dfc575-vrrp6
helpers_test.go:290: (dbg) kubectl --context functional-074555 describe pod hello-node-75c85bcc94-hhgj2 hello-node-connect-7d85dfc575-vrrp6:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-hhgj2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074555/192.168.49.2
	Start Time:       Mon, 01 Dec 2025 20:48:19 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bzjf4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bzjf4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m46s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hhgj2 to functional-074555
	  Normal   Pulling    7m8s (x5 over 9m47s)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m8s (x5 over 9m47s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m8s (x5 over 9m47s)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m43s (x20 over 9m46s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m31s (x21 over 9m46s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-vrrp6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074555/192.168.49.2
	Start Time:       Mon, 01 Dec 2025 20:48:02 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j84nc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j84nc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-vrrp6 to functional-074555
	  Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m55s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m55s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.66s)
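Every pull attempt above fails with "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list". This is CRI-O's short-name resolution (via containers-registries.conf) refusing an unqualified image reference when more than one unqualified-search registry could satisfy it. One conventional remedy is a short-name alias on the node; the sketch below is illustrative only (the drop-in path and alias target are assumptions, not taken from this run's configuration):

```toml
# /etc/containers/registries.conf.d/echo-server.conf (hypothetical drop-in)
# Pin the ambiguous short name to a single registry so that
# short-name-mode = "enforcing" no longer rejects the pull.
[aliases]
  "kicbase/echo-server" = "docker.io/kicbase/echo-server"
```

After adding such a drop-in and restarting CRI-O, `crictl pull kicbase/echo-server:latest` would resolve unambiguously to the aliased registry instead of returning an ambiguous candidate list.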

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-074555 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-074555 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-hhgj2" [bcb62dbe-5810-473f-8c3b-517f42ebf44d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1201 20:50:34.915942  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:51:02.620261  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:55:34.915772  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074555 -n functional-074555
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-01 20:58:20.037494089 +0000 UTC m=+1244.763295588
functional_test.go:1460: (dbg) Run:  kubectl --context functional-074555 describe po hello-node-75c85bcc94-hhgj2 -n default
functional_test.go:1460: (dbg) kubectl --context functional-074555 describe po hello-node-75c85bcc94-hhgj2 -n default:
Name:             hello-node-75c85bcc94-hhgj2
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-074555/192.168.49.2
Start Time:       Mon, 01 Dec 2025 20:48:19 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bzjf4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-bzjf4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hhgj2 to functional-074555
Normal   Pulling    7m22s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m22s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m22s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m57s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m45s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-074555 logs hello-node-75c85bcc94-hhgj2 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-074555 logs hello-node-75c85bcc94-hhgj2 -n default: exit status 1 (116.827553ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-hhgj2" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-074555 logs hello-node-75c85bcc94-hhgj2 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.88s)
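The deployment was created with the unqualified reference `--image kicbase/echo-server` (functional_test.go:1451), which CRI-O's enforcing short-name mode rejects at pull time. A fully qualified reference sidesteps short-name resolution entirely; the fragment below is a hypothetical variant of the pod template, not the manifest the test actually generates:

```yaml
# Hypothetical pod-template fragment: a fully qualified image reference
# bypasses short-name resolution, so enforcing mode never applies.
spec:
  containers:
  - name: echo-server
    image: docker.io/kicbase/echo-server:latest
```

Equivalently, passing the fully qualified name to `kubectl create deployment hello-node --image=docker.io/kicbase/echo-server` would avoid the ambiguous-list error on nodes configured this way.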

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074555 service --namespace=default --https --url hello-node: exit status 115 (527.640126ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31059
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-074555 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074555 service hello-node --url --format={{.IP}}: exit status 115 (591.187449ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-074555 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074555 service hello-node --url: exit status 115 (504.851399ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31059
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-074555 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31059
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 image load --daemon kicbase/echo-server:functional-074555 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-074555" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 image load --daemon kicbase/echo-server:functional-074555 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 image ls
2025/12/01 20:58:30 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:461: expected "kicbase/echo-server:functional-074555" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.26s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-074555
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 image load --daemon kicbase/echo-server:functional-074555 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-074555" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 image save kicbase/echo-server:functional-074555 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1201 20:58:33.599319  514372 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:58:33.599580  514372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:58:33.599619  514372 out.go:374] Setting ErrFile to fd 2...
	I1201 20:58:33.599639  514372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:58:33.599966  514372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:58:33.600772  514372 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:58:33.601013  514372 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:58:33.601826  514372 cli_runner.go:164] Run: docker container inspect functional-074555 --format={{.State.Status}}
	I1201 20:58:33.628116  514372 ssh_runner.go:195] Run: systemctl --version
	I1201 20:58:33.628189  514372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074555
	I1201 20:58:33.658871  514372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-074555/id_rsa Username:docker}
	I1201 20:58:33.768608  514372 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1201 20:58:33.768720  514372 cache_images.go:255] Failed to load cached images for "functional-074555": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1201 20:58:33.768752  514372 cache_images.go:267] failed pushing to: functional-074555

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.29s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-074555
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 image save --daemon kicbase/echo-server:functional-074555 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-074555
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-074555: exit status 1 (19.297618ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-074555

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-074555

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (509.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-198694 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1201 21:00:34.916192  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:01:57.984578  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:02:52.881600  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:02:52.888908  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:02:52.900293  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:02:52.921663  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:02:52.963009  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:02:53.044581  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:02:53.206204  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:02:53.528031  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:02:54.170227  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:02:55.452032  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:02:58.013448  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:03:03.134968  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:03:13.376652  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:03:33.858133  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:04:14.820750  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:05:34.916578  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:05:36.742153  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-198694 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m27.694850943s)

-- stdout --
	* [functional-198694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-198694" primary control-plane node in "functional-198694" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Found network options:
	  - HTTP_PROXY=localhost:33669
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:33669 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-198694 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-198694 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000220189s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000183792s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000183792s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-198694 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-198694
helpers_test.go:243: (dbg) docker inspect functional-198694:

-- stdout --
	[
	    {
	        "Id": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	        "Created": "2025-12-01T20:58:43.365574809Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515902,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:58:43.423541772Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hostname",
	        "HostsPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hosts",
	        "LogPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8-json.log",
	        "Name": "/functional-198694",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-198694:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-198694",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	                "LowerDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26-init/diff:/var/lib/docker/overlay2/f0ba49b44048d740697b37803f992c2f7a99e21ce77995ff128ceffc01329aa1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-198694",
	                "Source": "/var/lib/docker/volumes/functional-198694/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-198694",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-198694",
	                "name.minikube.sigs.k8s.io": "functional-198694",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8cb3cb57c35171bfce361b9e0de9c9f36ef89baf5e4ad0dd73159d10f1056820",
	            "SandboxKey": "/var/run/docker/netns/8cb3cb57c351",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-198694": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:9a:72:4c:a4:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9750c903db8645b2871ee2eb6fd897b77e607b9a995005513c7bcf81da63c819",
	                    "EndpointID": "884d9ec9fdfc44c10ccd4516f4ea05a765fb3ccb2118db0e8af2392e8613c402",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-198694",
	                        "e545295bd958"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694: exit status 6 (328.175565ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1201 21:07:10.041481  521681 status.go:458] kubeconfig endpoint: get endpoint: "functional-198694" does not appear in /home/jenkins/minikube-integration/21997-482752/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-074555 image load --daemon kicbase/echo-server:functional-074555 --alsologtostderr                                                             │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh            │ functional-074555 ssh sudo cat /etc/ssl/certs/486002.pem                                                                                                  │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh            │ functional-074555 ssh sudo cat /usr/share/ca-certificates/486002.pem                                                                                      │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image ls                                                                                                                                │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh            │ functional-074555 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image save kicbase/echo-server:functional-074555 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh            │ functional-074555 ssh sudo cat /etc/ssl/certs/4860022.pem                                                                                                 │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image rm kicbase/echo-server:functional-074555 --alsologtostderr                                                                        │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh            │ functional-074555 ssh sudo cat /usr/share/ca-certificates/4860022.pem                                                                                     │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image ls                                                                                                                                │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh            │ functional-074555 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image save --daemon kicbase/echo-server:functional-074555 --alsologtostderr                                                             │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ update-context │ functional-074555 update-context --alsologtostderr -v=2                                                                                                   │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ update-context │ functional-074555 update-context --alsologtostderr -v=2                                                                                                   │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ update-context │ functional-074555 update-context --alsologtostderr -v=2                                                                                                   │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image ls --format short --alsologtostderr                                                                                               │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image ls --format yaml --alsologtostderr                                                                                                │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh            │ functional-074555 ssh pgrep buildkitd                                                                                                                     │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │                     │
	│ image          │ functional-074555 image ls --format json --alsologtostderr                                                                                                │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image ls --format table --alsologtostderr                                                                                               │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image build -t localhost/my-image:functional-074555 testdata/build --alsologtostderr                                                    │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image ls                                                                                                                                │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ delete         │ -p functional-074555                                                                                                                                      │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ start          │ -p functional-198694 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:58:42
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:58:42.064571  515589 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:58:42.064776  515589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:58:42.064783  515589 out.go:374] Setting ErrFile to fd 2...
	I1201 20:58:42.064787  515589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:58:42.065184  515589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:58:42.065850  515589 out.go:368] Setting JSON to false
	I1201 20:58:42.067399  515589 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":9671,"bootTime":1764613051,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 20:58:42.067535  515589 start.go:143] virtualization:  
	I1201 20:58:42.072850  515589 out.go:179] * [functional-198694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 20:58:42.076920  515589 notify.go:221] Checking for updates...
	I1201 20:58:42.076875  515589 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:58:42.081507  515589 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:58:42.085661  515589 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 20:58:42.089160  515589 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 20:58:42.092585  515589 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 20:58:42.096025  515589 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:58:42.099690  515589 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:58:42.126613  515589 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 20:58:42.126744  515589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:58:42.213747  515589 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-01 20:58:42.202113728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 20:58:42.213852  515589 docker.go:319] overlay module found
	I1201 20:58:42.217792  515589 out.go:179] * Using the docker driver based on user configuration
	I1201 20:58:42.221032  515589 start.go:309] selected driver: docker
	I1201 20:58:42.221046  515589 start.go:927] validating driver "docker" against <nil>
	I1201 20:58:42.221060  515589 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:58:42.221922  515589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:58:42.280684  515589 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-01 20:58:42.27061986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 20:58:42.280848  515589 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1201 20:58:42.281087  515589 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:58:42.284275  515589 out.go:179] * Using Docker driver with root privileges
	I1201 20:58:42.287358  515589 cni.go:84] Creating CNI manager for ""
	I1201 20:58:42.287427  515589 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:58:42.287435  515589 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1201 20:58:42.287515  515589 start.go:353] cluster config:
	{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:58:42.290775  515589 out.go:179] * Starting "functional-198694" primary control-plane node in "functional-198694" cluster
	I1201 20:58:42.293895  515589 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:58:42.297156  515589 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:58:42.300198  515589 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 20:58:42.300277  515589 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 20:58:42.322806  515589 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:58:42.322818  515589 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1201 20:58:42.363162  515589 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1201 20:58:42.602367  515589 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1201 20:58:42.602622  515589 cache.go:107] acquiring lock: {Name:mkc02adc0b0ac86da96d7b1c6f73dd96db198bdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:58:42.602732  515589 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/config.json ...
	I1201 20:58:42.602734  515589 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 20:58:42.602744  515589 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 140.879µs
	I1201 20:58:42.602758  515589 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 20:58:42.602757  515589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/config.json: {Name:mk7f03a9aade736228d88e6d0114168daa6aca3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:58:42.602768  515589 cache.go:107] acquiring lock: {Name:mk453dcc67fddeb9d4497c9de9efb4fa1295449c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:58:42.602864  515589 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 20:58:42.602874  515589 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 102.635µs
	I1201 20:58:42.602880  515589 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 20:58:42.602901  515589 cache.go:107] acquiring lock: {Name:mk419ddf7fad28d46855543ef84396416e53becc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:58:42.602915  515589 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:58:42.602938  515589 start.go:360] acquireMachinesLock for functional-198694: {Name:mk75190be8638b73bbf357fb21be879be3d32136 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:58:42.602943  515589 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 20:58:42.602947  515589 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 59.207µs
	I1201 20:58:42.602953  515589 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 20:58:42.602965  515589 cache.go:107] acquiring lock: {Name:mka55d294ab8a696f44b35601f713e0abbf24c5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:58:42.602976  515589 start.go:364] duration metric: took 30.572µs to acquireMachinesLock for "functional-198694"
	I1201 20:58:42.602997  515589 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 20:58:42.603009  515589 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 37.751µs
	I1201 20:58:42.603018  515589 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 20:58:42.603026  515589 cache.go:107] acquiring lock: {Name:mk6dcec1fac0989e081c750d70caa7d5974f0e1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:58:42.602999  515589 start.go:93] Provisioning new machine with config: &{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:58:42.603052  515589 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 20:58:42.603053  515589 start.go:125] createHost starting for "" (driver="docker")
	I1201 20:58:42.603056  515589 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 31.892µs
	I1201 20:58:42.603063  515589 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 20:58:42.603071  515589 cache.go:107] acquiring lock: {Name:mkf9aa1f704582196eb72cf90c132f43843b4423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:58:42.603103  515589 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1201 20:58:42.603107  515589 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 37.497µs
	I1201 20:58:42.603111  515589 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 20:58:42.603119  515589 cache.go:107] acquiring lock: {Name:mk60d129c4890b38a9b86e2bfa4a9fa21bc4f57a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:58:42.603174  515589 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 20:58:42.603183  515589 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 65.303µs
	I1201 20:58:42.603188  515589 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 20:58:42.603196  515589 cache.go:107] acquiring lock: {Name:mk345d9c863dd9143d9156cb17f795118869c197 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:58:42.603495  515589 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 20:58:42.603505  515589 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 308.842µs
	I1201 20:58:42.603513  515589 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 20:58:42.603525  515589 cache.go:87] Successfully saved all images to host disk.
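	[editor's note] The cache hits above pair each image reference with a tar path under `.minikube/cache/images/<arch>/`; comparing the pairs, the only rewrite is the tag separator `:` becoming `_`. A minimal sketch of that mapping (the `cache_path` helper is hypothetical, not minikube code):

```shell
#!/bin/sh
# Hypothetical helper mirroring the ref -> cache-path convention seen in the
# log above. Assumption: only the ':' before the tag is rewritten to '_';
# registry and repository components become directories as-is.
cache_path() {
  ref="$1"   # e.g. registry.k8s.io/pause:3.10.1
  arch="$2"  # e.g. arm64
  printf '%s/.minikube/cache/images/%s/%s\n' \
    "$HOME" "$arch" "$(printf '%s' "$ref" | tr ':' '_')"
}

cache_path "registry.k8s.io/pause:3.10.1" "arm64"
```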
	I1201 20:58:42.608516  515589 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1201 20:58:42.608797  515589 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:33669 to docker env.
	I1201 20:58:42.608823  515589 start.go:159] libmachine.API.Create for "functional-198694" (driver="docker")
	I1201 20:58:42.608844  515589 client.go:173] LocalClient.Create starting
	I1201 20:58:42.608914  515589 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem
	I1201 20:58:42.608946  515589 main.go:143] libmachine: Decoding PEM data...
	I1201 20:58:42.608960  515589 main.go:143] libmachine: Parsing certificate...
	I1201 20:58:42.609032  515589 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem
	I1201 20:58:42.609052  515589 main.go:143] libmachine: Decoding PEM data...
	I1201 20:58:42.609067  515589 main.go:143] libmachine: Parsing certificate...
	I1201 20:58:42.609451  515589 cli_runner.go:164] Run: docker network inspect functional-198694 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1201 20:58:42.625522  515589 cli_runner.go:211] docker network inspect functional-198694 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1201 20:58:42.625592  515589 network_create.go:284] running [docker network inspect functional-198694] to gather additional debugging logs...
	I1201 20:58:42.625611  515589 cli_runner.go:164] Run: docker network inspect functional-198694
	W1201 20:58:42.642404  515589 cli_runner.go:211] docker network inspect functional-198694 returned with exit code 1
	I1201 20:58:42.642426  515589 network_create.go:287] error running [docker network inspect functional-198694]: docker network inspect functional-198694: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-198694 not found
	I1201 20:58:42.642437  515589 network_create.go:289] output of [docker network inspect functional-198694]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-198694 not found
	
	** /stderr **
	I1201 20:58:42.642542  515589 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:58:42.659206  515589 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001895c90}
	I1201 20:58:42.659242  515589 network_create.go:124] attempt to create docker network functional-198694 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1201 20:58:42.659296  515589 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-198694 functional-198694
	I1201 20:58:42.718278  515589 network_create.go:108] docker network functional-198694 192.168.49.0/24 created
	I1201 20:58:42.718303  515589 kic.go:121] calculated static IP "192.168.49.2" for the "functional-198694" container
	I1201 20:58:42.718395  515589 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1201 20:58:42.736224  515589 cli_runner.go:164] Run: docker volume create functional-198694 --label name.minikube.sigs.k8s.io=functional-198694 --label created_by.minikube.sigs.k8s.io=true
	I1201 20:58:42.753864  515589 oci.go:103] Successfully created a docker volume functional-198694
	I1201 20:58:42.753956  515589 cli_runner.go:164] Run: docker run --rm --name functional-198694-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-198694 --entrypoint /usr/bin/test -v functional-198694:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1201 20:58:43.296103  515589 oci.go:107] Successfully prepared a docker volume functional-198694
	I1201 20:58:43.296169  515589 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1201 20:58:43.296294  515589 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1201 20:58:43.296395  515589 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1201 20:58:43.351093  515589 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-198694 --name functional-198694 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-198694 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-198694 --network functional-198694 --ip 192.168.49.2 --volume functional-198694:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
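	[editor's note] The single long `docker run` above carries the flags that let the kic container behave like a small VM (privileged, relaxed seccomp/apparmor, tmpfs mounts, static IP on the per-profile network). A sketch of the essential subset, built as a string and printed rather than executed since it needs a Docker daemon and the kicbase image; flag values are copied from the log line:

```shell
#!/bin/sh
# Sketch: reconstruct (but do not run) the core kic container invocation.
cmd="docker run -d -t --privileged \
--security-opt seccomp=unconfined --security-opt apparmor=unconfined \
--tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
--hostname functional-198694 --name functional-198694 \
--network functional-198694 --ip 192.168.49.2 \
--memory=4096mb --cpus=2 --expose 8441 --publish=127.0.0.1::8441"
printf '%s\n' "$cmd"
```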
	I1201 20:58:43.646164  515589 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Running}}
	I1201 20:58:43.667622  515589 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 20:58:43.694362  515589 cli_runner.go:164] Run: docker exec functional-198694 stat /var/lib/dpkg/alternatives/iptables
	I1201 20:58:43.751100  515589 oci.go:144] the created container "functional-198694" has a running status.
	I1201 20:58:43.751120  515589 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa...
	I1201 20:58:43.862141  515589 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1201 20:58:43.883405  515589 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 20:58:43.903177  515589 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1201 20:58:43.903189  515589 kic_runner.go:114] Args: [docker exec --privileged functional-198694 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1201 20:58:43.965028  515589 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 20:58:43.989795  515589 machine.go:94] provisionDockerMachine start ...
	I1201 20:58:43.989877  515589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 20:58:44.015414  515589 main.go:143] libmachine: Using SSH client type: native
	I1201 20:58:44.015759  515589 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 20:58:44.015766  515589 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:58:44.016494  515589 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38140->127.0.0.1:33180: read: connection reset by peer
	I1201 20:58:47.166707  515589 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 20:58:47.166721  515589 ubuntu.go:182] provisioning hostname "functional-198694"
	I1201 20:58:47.166796  515589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 20:58:47.183799  515589 main.go:143] libmachine: Using SSH client type: native
	I1201 20:58:47.184112  515589 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 20:58:47.184122  515589 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-198694 && echo "functional-198694" | sudo tee /etc/hostname
	I1201 20:58:47.341163  515589 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 20:58:47.341262  515589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 20:58:47.363415  515589 main.go:143] libmachine: Using SSH client type: native
	I1201 20:58:47.363711  515589 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 20:58:47.363725  515589 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-198694' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-198694/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-198694' | sudo tee -a /etc/hosts; 
				fi
			fi
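	[editor's note] The SSH snippet above rewrites the `127.0.1.1` entry in `/etc/hosts` to the profile name, appending one if none exists. The same logic can be exercised against a scratch file (a sketch; minikube runs it over SSH with sudo, and its `grep -xq '.*\s...'` relies on GNU grep's `\s`, for which the portable `[[:space:]]` class is used here):

```shell
#!/bin/sh
# Reproduce the hostname rewrite from the log against a temp hosts file.
hosts="$(mktemp)"
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"

name="functional-198694"
if ! grep -q "[[:space:]]$name\$" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    # Replace the existing 127.0.1.1 entry in place.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    # No 127.0.1.1 entry yet: append one.
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
cat "$hosts"
```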
	I1201 20:58:47.511317  515589 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:58:47.511336  515589 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-482752/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-482752/.minikube}
	I1201 20:58:47.511354  515589 ubuntu.go:190] setting up certificates
	I1201 20:58:47.511367  515589 provision.go:84] configureAuth start
	I1201 20:58:47.511444  515589 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 20:58:47.529208  515589 provision.go:143] copyHostCerts
	I1201 20:58:47.529266  515589 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem, removing ...
	I1201 20:58:47.529279  515589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 20:58:47.529356  515589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem (1082 bytes)
	I1201 20:58:47.529456  515589 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem, removing ...
	I1201 20:58:47.529460  515589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 20:58:47.529487  515589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem (1123 bytes)
	I1201 20:58:47.529552  515589 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem, removing ...
	I1201 20:58:47.529556  515589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 20:58:47.529579  515589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem (1675 bytes)
	I1201 20:58:47.529632  515589 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem org=jenkins.functional-198694 san=[127.0.0.1 192.168.49.2 functional-198694 localhost minikube]
	I1201 20:58:47.881193  515589 provision.go:177] copyRemoteCerts
	I1201 20:58:47.881249  515589 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:58:47.881287  515589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 20:58:47.899247  515589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 20:58:48.002655  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:58:48.025560  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1201 20:58:48.044362  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:58:48.063297  515589 provision.go:87] duration metric: took 551.907084ms to configureAuth
	I1201 20:58:48.063315  515589 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:58:48.063520  515589 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:58:48.063627  515589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 20:58:48.081791  515589 main.go:143] libmachine: Using SSH client type: native
	I1201 20:58:48.082119  515589 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 20:58:48.082131  515589 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:58:48.382026  515589 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:58:48.382038  515589 machine.go:97] duration metric: took 4.392231474s to provisionDockerMachine
	I1201 20:58:48.382046  515589 client.go:176] duration metric: took 5.773198394s to LocalClient.Create
	I1201 20:58:48.382058  515589 start.go:167] duration metric: took 5.773237639s to libmachine.API.Create "functional-198694"
	I1201 20:58:48.382065  515589 start.go:293] postStartSetup for "functional-198694" (driver="docker")
	I1201 20:58:48.382085  515589 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:58:48.382150  515589 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:58:48.382196  515589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 20:58:48.399264  515589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 20:58:48.502891  515589 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:58:48.506088  515589 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:58:48.506105  515589 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:58:48.506115  515589 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/addons for local assets ...
	I1201 20:58:48.506170  515589 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/files for local assets ...
	I1201 20:58:48.506257  515589 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> 4860022.pem in /etc/ssl/certs
	I1201 20:58:48.506343  515589 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts -> hosts in /etc/test/nested/copy/486002
	I1201 20:58:48.506386  515589 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/486002
	I1201 20:58:48.513822  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 20:58:48.530639  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts --> /etc/test/nested/copy/486002/hosts (40 bytes)
	I1201 20:58:48.548518  515589 start.go:296] duration metric: took 166.440036ms for postStartSetup
	I1201 20:58:48.548893  515589 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 20:58:48.566178  515589 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/config.json ...
	I1201 20:58:48.566453  515589 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:58:48.566490  515589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 20:58:48.586516  515589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 20:58:48.688200  515589 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
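	[editor's note] The two `df` probes above read the used-percentage and free space (in GB) of `/var` inside the container. The same one-liners, pointed at `/` so they run on any Linux host (assuming the filesystem name is short enough that `df` keeps each mount on one line, so `NR==2` picks the data row):

```shell
#!/bin/sh
# Same disk probes as the log, against the root mount.
used_pct="$(df -h / | awk 'NR==2{print $5}')"   # used percentage, e.g. 37%
free_gb="$(df -BG / | awk 'NR==2{print $4}')"   # available space, e.g. 120G
echo "used=$used_pct free=$free_gb"
```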
	I1201 20:58:48.692735  515589 start.go:128] duration metric: took 6.08966822s to createHost
	I1201 20:58:48.692750  515589 start.go:83] releasing machines lock for "functional-198694", held for 6.089767704s
	I1201 20:58:48.692820  515589 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 20:58:48.713453  515589 out.go:179] * Found network options:
	I1201 20:58:48.716392  515589 out.go:179]   - HTTP_PROXY=localhost:33669
	W1201 20:58:48.719191  515589 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1201 20:58:48.722112  515589 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1201 20:58:48.725036  515589 ssh_runner.go:195] Run: cat /version.json
	I1201 20:58:48.725080  515589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 20:58:48.725104  515589 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:58:48.725181  515589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 20:58:48.744465  515589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 20:58:48.746212  515589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 20:58:48.939228  515589 ssh_runner.go:195] Run: systemctl --version
	I1201 20:58:48.945518  515589 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:58:48.981946  515589 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:58:48.986165  515589 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:58:48.986239  515589 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:58:49.014974  515589 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1201 20:58:49.014987  515589 start.go:496] detecting cgroup driver to use...
	I1201 20:58:49.015018  515589 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1201 20:58:49.015069  515589 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:58:49.032825  515589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:58:49.045908  515589 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:58:49.045963  515589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:58:49.063359  515589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:58:49.081852  515589 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:58:49.202420  515589 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:58:49.331608  515589 docker.go:234] disabling docker service ...
	I1201 20:58:49.331668  515589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:58:49.353763  515589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:58:49.368376  515589 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:58:49.495253  515589 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:58:49.613906  515589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:58:49.626783  515589 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:58:49.643479  515589 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:58:49.643538  515589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:58:49.653780  515589 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 20:58:49.653866  515589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:58:49.663457  515589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:58:49.672581  515589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:58:49.681725  515589 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:58:49.690124  515589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:58:49.698829  515589 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:58:49.712366  515589 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:58:49.721115  515589 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:58:49.728673  515589 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:58:49.736336  515589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:58:49.865573  515589 ssh_runner.go:195] Run: sudo systemctl restart crio
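The run of `sed -i` commands above rewrites cri-o's drop-in config in place: set the pause image, switch the cgroup manager, then delete and re-add `conmon_cgroup`. The same substitutions against a scratch copy of `02-crio.conf` (a sketch with made-up starting values, not minikube's code):

```shell
# Sketch: reproduce minikube's sed-based cri-o drop-in edits on a scratch file.
tmp=$(mktemp -d)
conf="$tmp/02-crio.conf"
cat > "$conf" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# The same substitutions the log shows, in the same order:
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

result=$(cat "$conf")
rm -rf "$tmp"
```

Deleting `conmon_cgroup` before appending it keeps the edit idempotent: re-running the sequence never produces duplicate keys.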
	I1201 20:58:50.050394  515589 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:58:50.050470  515589 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:58:50.054658  515589 start.go:564] Will wait 60s for crictl version
	I1201 20:58:50.054724  515589 ssh_runner.go:195] Run: which crictl
	I1201 20:58:50.058621  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:58:50.085115  515589 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:58:50.085207  515589 ssh_runner.go:195] Run: crio --version
	I1201 20:58:50.114660  515589 ssh_runner.go:195] Run: crio --version
	I1201 20:58:50.146893  515589 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1201 20:58:50.149750  515589 cli_runner.go:164] Run: docker network inspect functional-198694 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:58:50.165909  515589 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1201 20:58:50.169609  515589 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
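The `/etc/hosts` update above uses a grep-then-append idiom: strip any stale `host.minikube.internal` line, append the fresh one, and copy the result back. A sketch of the same idiom on a scratch hosts file (no sudo, no real `/etc/hosts`):

```shell
# Sketch: minikube's idempotent hosts-entry update, on a scratch file.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"

# Drop any existing entry, then append exactly one fresh line:
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

result=$(cat "$hosts")
rm -f "$hosts"
```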
	I1201 20:58:50.179709  515589 kubeadm.go:884] updating cluster {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:58:50.179819  515589 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 20:58:50.179863  515589 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:58:50.204531  515589 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1201 20:58:50.204544  515589 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1201 20:58:50.204593  515589 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:58:50.204819  515589 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1201 20:58:50.204919  515589 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1201 20:58:50.205001  515589 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1201 20:58:50.205081  515589 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1201 20:58:50.205176  515589 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1201 20:58:50.205273  515589 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1201 20:58:50.205376  515589 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1201 20:58:50.206486  515589 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1201 20:58:50.206954  515589 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1201 20:58:50.207095  515589 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1201 20:58:50.207412  515589 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1201 20:58:50.207486  515589 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1201 20:58:50.207533  515589 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1201 20:58:50.207581  515589 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1201 20:58:50.207629  515589 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:58:50.532690  515589 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1201 20:58:50.552519  515589 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1201 20:58:50.554743  515589 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1201 20:58:50.555312  515589 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1201 20:58:50.565396  515589 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1201 20:58:50.566809  515589 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1201 20:58:50.588651  515589 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1201 20:58:50.620859  515589 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1201 20:58:50.620891  515589 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1201 20:58:50.620949  515589 ssh_runner.go:195] Run: which crictl
	I1201 20:58:50.621031  515589 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1201 20:58:50.621044  515589 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1201 20:58:50.621065  515589 ssh_runner.go:195] Run: which crictl
	I1201 20:58:50.691952  515589 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1201 20:58:50.691983  515589 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1201 20:58:50.692040  515589 ssh_runner.go:195] Run: which crictl
	I1201 20:58:50.706787  515589 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1201 20:58:50.706822  515589 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1201 20:58:50.706866  515589 ssh_runner.go:195] Run: which crictl
	I1201 20:58:50.714776  515589 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1201 20:58:50.714789  515589 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1201 20:58:50.714804  515589 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1201 20:58:50.714808  515589 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1201 20:58:50.714849  515589 ssh_runner.go:195] Run: which crictl
	I1201 20:58:50.714856  515589 ssh_runner.go:195] Run: which crictl
	I1201 20:58:50.714901  515589 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1201 20:58:50.714924  515589 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1201 20:58:50.714926  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1201 20:58:50.714946  515589 ssh_runner.go:195] Run: which crictl
	I1201 20:58:50.714967  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1201 20:58:50.715010  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1201 20:58:50.715048  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1201 20:58:50.784771  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1201 20:58:50.784833  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1201 20:58:50.784874  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1201 20:58:50.784918  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1201 20:58:50.784972  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1201 20:58:50.785017  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1201 20:58:50.785064  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1201 20:58:50.881832  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1201 20:58:50.881894  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1201 20:58:50.881938  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1201 20:58:50.902215  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1201 20:58:50.902285  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1201 20:58:50.902335  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1201 20:58:50.909016  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1201 20:58:50.995475  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1201 20:58:50.995544  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1201 20:58:50.995591  515589 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1201 20:58:50.995654  515589 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1201 20:58:51.005159  515589 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1201 20:58:51.005265  515589 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1201 20:58:51.005349  515589 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1201 20:58:51.005389  515589 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1201 20:58:51.005430  515589 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1201 20:58:51.005468  515589 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1201 20:58:51.015730  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1201 20:58:51.066046  515589 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1201 20:58:51.066074  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1201 20:58:51.066135  515589 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1201 20:58:51.066206  515589 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1201 20:58:51.066244  515589 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1201 20:58:51.066281  515589 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1201 20:58:51.066329  515589 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1201 20:58:51.066339  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1201 20:58:51.066380  515589 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1201 20:58:51.066388  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1201 20:58:51.066419  515589 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1201 20:58:51.066428  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1201 20:58:51.080683  515589 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1201 20:58:51.080778  515589 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1201 20:58:51.123444  515589 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1201 20:58:51.123477  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1201 20:58:51.123524  515589 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1201 20:58:51.123534  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
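Each "existence check ... Process exited with status 1" pair above is the same pattern: `stat` the image tarball on the node, and only when that fails, `scp` it over from the local cache. A runnable sketch of the pattern with plain `cp` standing in for the ssh-runner's scp (paths and the tarball content here are made up):

```shell
# Sketch: minikube's "stat, else copy" check for each cached image tarball.
cache=$(mktemp -d); dest=$(mktemp -d)
printf 'fake-image-tarball' > "$cache/pause_3.10.1"

img="pause_3.10.1"
if ! stat -c "%s %y" "$dest/$img" >/dev/null 2>&1; then
    # stat exited non-zero: the file is missing, so transfer it.
    # (minikube does an scp over its ssh_runner; plain cp here.)
    cp "$cache/$img" "$dest/$img"
fi

copied=$(stat -c %s "$dest/$img")
rm -rf "$cache" "$dest"
```

The byte counts in the log (e.g. `(268288 bytes)`) come from the source file's size, which is why `stat -c "%s %y"` asks for size and mtime in one call.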
	I1201 20:58:51.147834  515589 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1201 20:58:51.147898  515589 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1201 20:58:51.206720  515589 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1201 20:58:51.206746  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	W1201 20:58:51.460200  515589 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1201 20:58:51.460356  515589 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:58:51.469132  515589 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1201 20:58:51.616917  515589 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1201 20:58:51.616961  515589 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:58:51.617121  515589 ssh_runner.go:195] Run: which crictl
	I1201 20:58:51.650996  515589 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1201 20:58:51.651069  515589 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1201 20:58:51.692135  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:58:53.202637  515589 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.551535797s)
	I1201 20:58:53.202655  515589 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1201 20:58:53.202667  515589 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.510512173s)
	I1201 20:58:53.202673  515589 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1201 20:58:53.202725  515589 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1201 20:58:53.202726  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:58:54.376631  515589 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.17388417s)
	I1201 20:58:54.376647  515589 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1201 20:58:54.376666  515589 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.17392435s)
	I1201 20:58:54.376675  515589 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1201 20:58:54.376727  515589 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1201 20:58:54.376726  515589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:58:54.410011  515589 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1201 20:58:54.410131  515589 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1201 20:58:55.682306  515589 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.305558703s)
	I1201 20:58:55.682335  515589 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1201 20:58:55.682343  515589 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.272198036s)
	I1201 20:58:55.682355  515589 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1201 20:58:55.682371  515589 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1201 20:58:55.682395  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1201 20:58:55.682410  515589 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1201 20:58:56.878184  515589 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.195752653s)
	I1201 20:58:56.878213  515589 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1201 20:58:56.878236  515589 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1201 20:58:56.878296  515589 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1201 20:58:58.203275  515589 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.32495697s)
	I1201 20:58:58.203291  515589 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1201 20:58:58.203308  515589 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1201 20:58:58.203359  515589 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1201 20:59:00.135787  515589 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.932403603s)
	I1201 20:59:00.135806  515589 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1201 20:59:00.135847  515589 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1201 20:59:00.135916  515589 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1201 20:59:00.808514  515589 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1201 20:59:00.808541  515589 cache_images.go:125] Successfully loaded all cached images
	I1201 20:59:00.808545  515589 cache_images.go:94] duration metric: took 10.603991231s to LoadCachedImages
	I1201 20:59:00.808555  515589 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1201 20:59:00.808644  515589 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-198694 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:59:00.808724  515589 ssh_runner.go:195] Run: crio config
	I1201 20:59:00.865886  515589 cni.go:84] Creating CNI manager for ""
	I1201 20:59:00.865897  515589 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:59:00.865920  515589 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:59:00.865941  515589 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-198694 NodeName:functional-198694 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:59:00.866053  515589 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-198694"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
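	[editor's note] The generated kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch of listing each document's `kind` without a YAML parser; the sample file content mirrors the dump above, and the `/tmp` path is a stand-in for `/var/tmp/minikube/kubeadm.yaml`:

```shell
# Write a sample multi-document config matching the kinds in the log above.
cat > /tmp/kubeadm-sample.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# Each YAML document declares its kind at column 0; collect them in order.
kinds=$(grep '^kind:' /tmp/kubeadm-sample.yaml | awk '{print $2}' | xargs)
echo "$kinds"
```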
	
	I1201 20:59:00.866130  515589 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 20:59:00.874004  515589 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1201 20:59:00.874074  515589 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 20:59:00.881874  515589 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1201 20:59:00.881956  515589 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1201 20:59:00.882037  515589 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1201 20:59:00.882068  515589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:59:00.882153  515589 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1201 20:59:00.882196  515589 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1201 20:59:00.887104  515589 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1201 20:59:00.887206  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1201 20:59:00.902114  515589 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1201 20:59:00.902148  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1201 20:59:00.902264  515589 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1201 20:59:00.933943  515589 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1201 20:59:00.933973  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
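	[editor's note] The `binary.go:80` lines above fetch kubectl/kubelet/kubeadm with a `?checksum=file:...sha256` suffix, i.e. the downloaded file's SHA-256 digest is compared against the published `.sha256` file. A sketch of that verification step under stated assumptions (a throwaway temp file stands in for the downloaded binary, and the "published" digest is computed locally):

```shell
# Stand-in for a downloaded binary.
bin=$(mktemp)
printf 'fake-kubelet-binary' > "$bin"
# In reality this digest comes from the .sha256 URL next to the binary.
expected=$(sha256sum "$bin" | awk '{print $1}')
# Digest of what was actually downloaded.
actual=$(sha256sum "$bin" | awk '{print $1}')
if [ "$actual" = "$expected" ]; then result=ok; else result=bad; fi
echo "$result"
```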
	I1201 20:59:01.798809  515589 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:59:01.807206  515589 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 20:59:01.821018  515589 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 20:59:01.835061  515589 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1201 20:59:01.850283  515589 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:59:01.854968  515589 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
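	[editor's note] The `/etc/hosts` rewrite above is an idempotent upsert: strip any existing line ending in the control-plane name, then append the current mapping. A sketch of the same pattern against a temporary file instead of the real `/etc/hosts` (the stale 192.168.49.3 entry is invented for illustration):

```shell
hosts=$(mktemp)
# Start with a stale mapping for the cluster's control-plane name.
printf '127.0.0.1\tlocalhost\n192.168.49.3\tcontrol-plane.minikube.internal\n' > "$hosts"
# Drop any existing entry for the name, then append the current mapping,
# mirroring the { grep -v ...; echo ...; } > tmp; cp tmp /etc/hosts idiom above.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '192.168.49.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
# Exactly one entry must remain, however many times the update runs.
entries=$(grep -c 'control-plane.minikube.internal' "$hosts")
echo "$entries"
```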
	I1201 20:59:01.866313  515589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:59:01.985650  515589 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:59:02.010633  515589 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694 for IP: 192.168.49.2
	I1201 20:59:02.010646  515589 certs.go:195] generating shared ca certs ...
	I1201 20:59:02.010663  515589 certs.go:227] acquiring lock for ca certs: {Name:mk0475ccdbd6f854bab22fd8dfb32cc1af021336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:59:02.010857  515589 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key
	I1201 20:59:02.010929  515589 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key
	I1201 20:59:02.010936  515589 certs.go:257] generating profile certs ...
	I1201 20:59:02.011009  515589 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key
	I1201 20:59:02.011026  515589 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt with IP's: []
	I1201 20:59:02.208139  515589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt ...
	I1201 20:59:02.208155  515589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: {Name:mk6ec124c3e47ed73c0ba6bb7136afe24dd34aea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:59:02.208366  515589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key ...
	I1201 20:59:02.208373  515589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key: {Name:mk72a5cad3f8e223b655cec400fc91fb583e6eb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:59:02.208473  515589 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key.ab5f5a28
	I1201 20:59:02.208485  515589 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt.ab5f5a28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1201 20:59:02.494517  515589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt.ab5f5a28 ...
	I1201 20:59:02.494533  515589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt.ab5f5a28: {Name:mk83c5fe4dd18dd83894f954fea1f31a95b4ac8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:59:02.494729  515589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key.ab5f5a28 ...
	I1201 20:59:02.494736  515589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key.ab5f5a28: {Name:mk6b4bcf5025c0d3d509d2f48a7ff505dd4fcabb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:59:02.494817  515589 certs.go:382] copying /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt.ab5f5a28 -> /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt
	I1201 20:59:02.494897  515589 certs.go:386] copying /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key.ab5f5a28 -> /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key
	I1201 20:59:02.494954  515589 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key
	I1201 20:59:02.494965  515589 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.crt with IP's: []
	I1201 20:59:02.700002  515589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.crt ...
	I1201 20:59:02.700019  515589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.crt: {Name:mk9260d839191bb24e8217c7fddf516ed55fa1f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:59:02.700211  515589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key ...
	I1201 20:59:02.700219  515589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key: {Name:mke8bc0e1f835e317197bb958582698a5bef079a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:59:02.700414  515589 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem (1338 bytes)
	W1201 20:59:02.700455  515589 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002_empty.pem, impossibly tiny 0 bytes
	I1201 20:59:02.700462  515589 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 20:59:02.700490  515589 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:59:02.700513  515589 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:59:02.700539  515589 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem (1675 bytes)
	I1201 20:59:02.700582  515589 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 20:59:02.701486  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:59:02.724334  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 20:59:02.743098  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:59:02.761300  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1201 20:59:02.779664  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 20:59:02.797581  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:59:02.814905  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:59:02.833359  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 20:59:02.852539  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:59:02.871326  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem --> /usr/share/ca-certificates/486002.pem (1338 bytes)
	I1201 20:59:02.890100  515589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /usr/share/ca-certificates/4860022.pem (1708 bytes)
	I1201 20:59:02.909406  515589 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:59:02.924329  515589 ssh_runner.go:195] Run: openssl version
	I1201 20:59:02.931436  515589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:59:02.940358  515589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:59:02.944957  515589 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:59:02.945028  515589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:59:02.990781  515589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:59:02.999526  515589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/486002.pem && ln -fs /usr/share/ca-certificates/486002.pem /etc/ssl/certs/486002.pem"
	I1201 20:59:03.009293  515589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/486002.pem
	I1201 20:59:03.014156  515589 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 20:59:03.014214  515589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/486002.pem
	I1201 20:59:03.056466  515589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/486002.pem /etc/ssl/certs/51391683.0"
	I1201 20:59:03.065319  515589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4860022.pem && ln -fs /usr/share/ca-certificates/4860022.pem /etc/ssl/certs/4860022.pem"
	I1201 20:59:03.074113  515589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4860022.pem
	I1201 20:59:03.078263  515589 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 20:59:03.078323  515589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4860022.pem
	I1201 20:59:03.119904  515589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4860022.pem /etc/ssl/certs/3ec20f2e.0"
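	[editor's note] The `ln -fs ... /etc/ssl/certs/b5213941.0` steps above install CA certs under OpenSSL's subject-hash lookup convention: `openssl x509 -hash` prints an 8-hex-digit subject hash, and OpenSSL resolves trust anchors via `<hash>.0` symlinks. A sketch with a throwaway self-signed CA in a temp directory (the `sketchCA` name is invented):

```shell
dir=$(mktemp -d)
# Generate a throwaway self-signed CA, standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=sketchCA' \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
# Same command the log runs to derive the symlink name (e.g. b5213941).
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
[ -L "$dir/$hash.0" ] && link=ok || link=bad
echo "$hash $link"
```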
	I1201 20:59:03.129722  515589 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:59:03.133667  515589 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1201 20:59:03.133715  515589 kubeadm.go:401] StartCluster: {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:59:03.133783  515589 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:59:03.133847  515589 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:59:03.160909  515589 cri.go:89] found id: ""
	I1201 20:59:03.160978  515589 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:59:03.169221  515589 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 20:59:03.177585  515589 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 20:59:03.177638  515589 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 20:59:03.186029  515589 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 20:59:03.186041  515589 kubeadm.go:158] found existing configuration files:
	
	I1201 20:59:03.186094  515589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1201 20:59:03.194531  515589 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 20:59:03.194602  515589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 20:59:03.202550  515589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1201 20:59:03.210843  515589 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 20:59:03.210910  515589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 20:59:03.218711  515589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1201 20:59:03.226919  515589 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 20:59:03.226985  515589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 20:59:03.235101  515589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1201 20:59:03.243311  515589 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 20:59:03.243375  515589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 20:59:03.251235  515589 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 20:59:03.358758  515589 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1201 20:59:03.359230  515589 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1201 20:59:03.422657  515589 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 21:03:06.560679  515589 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1201 21:03:06.560702  515589 kubeadm.go:319] 
	I1201 21:03:06.560771  515589 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1201 21:03:06.564141  515589 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 21:03:06.564190  515589 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 21:03:06.564277  515589 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 21:03:06.564328  515589 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1201 21:03:06.564360  515589 kubeadm.go:319] OS: Linux
	I1201 21:03:06.564401  515589 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 21:03:06.564445  515589 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1201 21:03:06.564488  515589 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 21:03:06.564532  515589 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 21:03:06.564576  515589 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 21:03:06.564621  515589 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 21:03:06.564662  515589 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 21:03:06.564706  515589 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 21:03:06.564749  515589 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1201 21:03:06.564816  515589 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 21:03:06.564903  515589 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 21:03:06.564987  515589 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 21:03:06.565045  515589 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 21:03:06.568258  515589 out.go:252]   - Generating certificates and keys ...
	I1201 21:03:06.568337  515589 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 21:03:06.568405  515589 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 21:03:06.568474  515589 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1201 21:03:06.568530  515589 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1201 21:03:06.568589  515589 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1201 21:03:06.568638  515589 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1201 21:03:06.568690  515589 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1201 21:03:06.568810  515589 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-198694 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1201 21:03:06.568864  515589 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1201 21:03:06.569037  515589 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-198694 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1201 21:03:06.569105  515589 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1201 21:03:06.569164  515589 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1201 21:03:06.569204  515589 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1201 21:03:06.569267  515589 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 21:03:06.569319  515589 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 21:03:06.569374  515589 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 21:03:06.569428  515589 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 21:03:06.569490  515589 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 21:03:06.569568  515589 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 21:03:06.569687  515589 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 21:03:06.569763  515589 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 21:03:06.572631  515589 out.go:252]   - Booting up control plane ...
	I1201 21:03:06.572723  515589 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 21:03:06.572796  515589 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 21:03:06.572863  515589 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 21:03:06.573003  515589 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 21:03:06.573106  515589 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 21:03:06.573222  515589 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 21:03:06.573307  515589 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 21:03:06.573343  515589 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 21:03:06.573470  515589 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 21:03:06.573588  515589 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 21:03:06.573668  515589 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000220189s
	I1201 21:03:06.573676  515589 kubeadm.go:319] 
	I1201 21:03:06.573737  515589 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1201 21:03:06.573770  515589 kubeadm.go:319] 	- The kubelet is not running
	I1201 21:03:06.573877  515589 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1201 21:03:06.573882  515589 kubeadm.go:319] 
	I1201 21:03:06.573994  515589 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1201 21:03:06.574024  515589 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1201 21:03:06.574051  515589 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1201 21:03:06.574120  515589 kubeadm.go:319] 
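	[editor's note] The failure is kubeadm's `wait-control-plane` phase timing out while polling `http://127.0.0.1:10248/healthz` (up to 4m0s, per the `[kubelet-check]` lines above). A sketch of that retry loop with the probe stubbed out, so the wait logic can be exercised without a running kubelet; `probe` here is a hypothetical stand-in for `curl -sSf http://127.0.0.1:10248/healthz`:

```shell
attempts=0
probe() {
  # Stand-in for the kubelet healthz probe; succeeds on the third call.
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}
tries=0
until probe; do
  tries=$((tries + 1))
  # Give up after a bounded number of attempts, as kubeadm does after 4m.
  if [ "$tries" -ge 10 ]; then break; fi
  sleep 0.1
done
echo "$attempts"
```

When the endpoint never becomes healthy, `systemctl status kubelet` and `journalctl -xeu kubelet` (as the log suggests) are the next step for finding out why the kubelet process itself is not starting.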
	W1201 21:03:06.574180  515589 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-198694 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-198694 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000220189s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1201 21:03:06.574264  515589 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1201 21:03:06.979864  515589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 21:03:06.993331  515589 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 21:03:06.993389  515589 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 21:03:07.001562  515589 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 21:03:07.001570  515589 kubeadm.go:158] found existing configuration files:
	
	I1201 21:03:07.001624  515589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1201 21:03:07.010943  515589 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 21:03:07.011005  515589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 21:03:07.019232  515589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1201 21:03:07.027633  515589 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 21:03:07.027690  515589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 21:03:07.035669  515589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1201 21:03:07.043631  515589 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 21:03:07.043685  515589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 21:03:07.051719  515589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1201 21:03:07.060207  515589 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 21:03:07.060268  515589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 21:03:07.068003  515589 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 21:03:07.108144  515589 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 21:03:07.108564  515589 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 21:03:07.178840  515589 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 21:03:07.178913  515589 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1201 21:03:07.178954  515589 kubeadm.go:319] OS: Linux
	I1201 21:03:07.179001  515589 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 21:03:07.179053  515589 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1201 21:03:07.179099  515589 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 21:03:07.179223  515589 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 21:03:07.179282  515589 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 21:03:07.179351  515589 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 21:03:07.179396  515589 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 21:03:07.179492  515589 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 21:03:07.179548  515589 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1201 21:03:07.250256  515589 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 21:03:07.250360  515589 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 21:03:07.250463  515589 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 21:03:07.267568  515589 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 21:03:07.272961  515589 out.go:252]   - Generating certificates and keys ...
	I1201 21:03:07.273069  515589 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 21:03:07.273155  515589 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 21:03:07.273251  515589 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1201 21:03:07.273323  515589 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1201 21:03:07.273404  515589 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1201 21:03:07.273467  515589 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1201 21:03:07.273540  515589 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1201 21:03:07.273612  515589 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1201 21:03:07.273697  515589 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1201 21:03:07.273780  515589 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1201 21:03:07.273821  515589 kubeadm.go:319] [certs] Using the existing "sa" key
	I1201 21:03:07.273885  515589 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 21:03:08.057415  515589 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 21:03:08.214539  515589 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 21:03:08.504224  515589 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 21:03:08.759492  515589 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 21:03:09.079825  515589 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 21:03:09.080559  515589 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 21:03:09.085085  515589 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 21:03:09.088296  515589 out.go:252]   - Booting up control plane ...
	I1201 21:03:09.088398  515589 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 21:03:09.088477  515589 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 21:03:09.088543  515589 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 21:03:09.102904  515589 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 21:03:09.103026  515589 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 21:03:09.110664  515589 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 21:03:09.110915  515589 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 21:03:09.111160  515589 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 21:03:09.239056  515589 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 21:03:09.239245  515589 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 21:07:09.239434  515589 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000183792s
	I1201 21:07:09.239451  515589 kubeadm.go:319] 
	I1201 21:07:09.239508  515589 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1201 21:07:09.239539  515589 kubeadm.go:319] 	- The kubelet is not running
	I1201 21:07:09.239650  515589 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1201 21:07:09.239653  515589 kubeadm.go:319] 
	I1201 21:07:09.239771  515589 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1201 21:07:09.239816  515589 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1201 21:07:09.239846  515589 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1201 21:07:09.239849  515589 kubeadm.go:319] 
	I1201 21:07:09.244443  515589 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1201 21:07:09.244887  515589 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1201 21:07:09.245001  515589 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 21:07:09.245264  515589 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1201 21:07:09.245268  515589 kubeadm.go:319] 
	I1201 21:07:09.245388  515589 kubeadm.go:403] duration metric: took 8m6.111679242s to StartCluster
	I1201 21:07:09.245421  515589 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:07:09.245482  515589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:07:09.245547  515589 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1201 21:07:09.271039  515589 cri.go:89] found id: ""
	I1201 21:07:09.271054  515589 logs.go:282] 0 containers: []
	W1201 21:07:09.271061  515589 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:07:09.271067  515589 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:07:09.271127  515589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:07:09.298109  515589 cri.go:89] found id: ""
	I1201 21:07:09.298124  515589 logs.go:282] 0 containers: []
	W1201 21:07:09.298131  515589 logs.go:284] No container was found matching "etcd"
	I1201 21:07:09.298136  515589 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:07:09.298192  515589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:07:09.323510  515589 cri.go:89] found id: ""
	I1201 21:07:09.323524  515589 logs.go:282] 0 containers: []
	W1201 21:07:09.323531  515589 logs.go:284] No container was found matching "coredns"
	I1201 21:07:09.323536  515589 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:07:09.323594  515589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:07:09.353666  515589 cri.go:89] found id: ""
	I1201 21:07:09.353681  515589 logs.go:282] 0 containers: []
	W1201 21:07:09.353691  515589 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:07:09.353700  515589 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:07:09.353767  515589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:07:09.384071  515589 cri.go:89] found id: ""
	I1201 21:07:09.384085  515589 logs.go:282] 0 containers: []
	W1201 21:07:09.384092  515589 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:07:09.384097  515589 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:07:09.384155  515589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:07:09.416809  515589 cri.go:89] found id: ""
	I1201 21:07:09.416823  515589 logs.go:282] 0 containers: []
	W1201 21:07:09.416830  515589 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:07:09.416835  515589 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:07:09.416895  515589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:07:09.449944  515589 cri.go:89] found id: ""
	I1201 21:07:09.449958  515589 logs.go:282] 0 containers: []
	W1201 21:07:09.449965  515589 logs.go:284] No container was found matching "kindnet"
	I1201 21:07:09.449973  515589 logs.go:123] Gathering logs for kubelet ...
	I1201 21:07:09.449984  515589 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:07:09.524971  515589 logs.go:123] Gathering logs for dmesg ...
	I1201 21:07:09.524991  515589 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:07:09.541343  515589 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:07:09.541359  515589 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:07:09.607026  515589 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:07:09.598735    5462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:07:09.599395    5462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:07:09.601106    5462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:07:09.601752    5462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:07:09.603490    5462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:07:09.598735    5462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:07:09.599395    5462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:07:09.601106    5462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:07:09.601752    5462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:07:09.603490    5462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:07:09.607037  515589 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:07:09.607047  515589 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:07:09.648782  515589 logs.go:123] Gathering logs for container status ...
	I1201 21:07:09.648800  515589 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1201 21:07:09.678581  515589 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000183792s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1201 21:07:09.678614  515589 out.go:285] * 
	W1201 21:07:09.678672  515589 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000183792s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1201 21:07:09.678693  515589 out.go:285] * 
	W1201 21:07:09.680942  515589 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 21:07:09.685860  515589 out.go:203] 
	W1201 21:07:09.689556  515589 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000183792s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1201 21:07:09.689599  515589 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1201 21:07:09.689642  515589 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1201 21:07:09.693024  515589 out.go:203] 
	
	
	==> CRI-O <==
	Dec 01 20:58:51 functional-198694 crio[843]: time="2025-12-01T20:58:51.062386371Z" level=info msg="Image registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 not found" id=172db9fd-129b-4a50-88de-c27a6f03cda0 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:58:51 functional-198694 crio[843]: time="2025-12-01T20:58:51.062427716Z" level=info msg="Neither image nor artifact registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 found" id=172db9fd-129b-4a50-88de-c27a6f03cda0 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:58:51 functional-198694 crio[843]: time="2025-12-01T20:58:51.800336251Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=aa9b4ff3-6f15-4011-a008-405d7decf720 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:58:51 functional-198694 crio[843]: time="2025-12-01T20:58:51.800797722Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=aa9b4ff3-6f15-4011-a008-405d7decf720 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:58:51 functional-198694 crio[843]: time="2025-12-01T20:58:51.800841652Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=aa9b4ff3-6f15-4011-a008-405d7decf720 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:58:53 functional-198694 crio[843]: time="2025-12-01T20:58:53.226813913Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=714e47b9-0c14-4988-bc94-009008f4cbe7 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:58:53 functional-198694 crio[843]: time="2025-12-01T20:58:53.227151399Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=714e47b9-0c14-4988-bc94-009008f4cbe7 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:58:53 functional-198694 crio[843]: time="2025-12-01T20:58:53.227212379Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=714e47b9-0c14-4988-bc94-009008f4cbe7 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:58:54 functional-198694 crio[843]: time="2025-12-01T20:58:54.406400253Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ed86ad4e-69a6-481c-8d53-509eeb9b9a64 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:58:54 functional-198694 crio[843]: time="2025-12-01T20:58:54.406757399Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=ed86ad4e-69a6-481c-8d53-509eeb9b9a64 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:58:54 functional-198694 crio[843]: time="2025-12-01T20:58:54.406815604Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=ed86ad4e-69a6-481c-8d53-509eeb9b9a64 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:59:03 functional-198694 crio[843]: time="2025-12-01T20:59:03.427452983Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3c574fa4-e102-401e-8f1e-c3482dffae2b name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:59:03 functional-198694 crio[843]: time="2025-12-01T20:59:03.430470099Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=a1c8e194-e0d1-4692-b9e3-2ff6a0ba219d name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:59:03 functional-198694 crio[843]: time="2025-12-01T20:59:03.432095287Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=afa7dfc0-7316-4f8a-83a8-1282f5ffa339 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:59:03 functional-198694 crio[843]: time="2025-12-01T20:59:03.433657354Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=5f2596f8-24b9-49d9-ab95-a7bf16b58bac name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:59:03 functional-198694 crio[843]: time="2025-12-01T20:59:03.434695165Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=dfb2af42-5a47-4c76-9a91-05fff5e589ef name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:59:03 functional-198694 crio[843]: time="2025-12-01T20:59:03.436290593Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=a2e7fe16-4796-4cd4-b0b3-6624ce159411 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:59:03 functional-198694 crio[843]: time="2025-12-01T20:59:03.43717987Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5ca903d3-2858-4269-9bbf-75187e3e374a name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:03:07 functional-198694 crio[843]: time="2025-12-01T21:03:07.253515103Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=d853c451-2329-4669-9679-f3b105d5b9d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:03:07 functional-198694 crio[843]: time="2025-12-01T21:03:07.255421909Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=117c0656-fa10-4d94-a626-6b05558db802 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:03:07 functional-198694 crio[843]: time="2025-12-01T21:03:07.256974606Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=da034152-ec65-4a06-8214-dc1c03612d6a name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:03:07 functional-198694 crio[843]: time="2025-12-01T21:03:07.258430804Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=640d3505-767f-435e-bb5a-d379e08b9d5c name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:03:07 functional-198694 crio[843]: time="2025-12-01T21:03:07.25943276Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=376a7bcf-4344-4f71-a4fc-dd18146ba9a0 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:03:07 functional-198694 crio[843]: time="2025-12-01T21:03:07.260908666Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=f617128d-1269-4465-95c0-23211fa820b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:03:07 functional-198694 crio[843]: time="2025-12-01T21:03:07.261783478Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=68b37215-5fab-403a-935a-982dd29e0344 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:07:10.683668    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:07:10.684362    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:07:10.685420    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:07:10.686094    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:07:10.687650    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 1 19:31] hrtimer: interrupt took 3224715 ns
	[Dec 1 20:00] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:16] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 1 20:22] systemd-journald[231]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 1 20:37] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:38] overlayfs: idmapped layers are currently not supported
	[  +0.076902] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 1 20:44] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:45] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:58] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:07:10 up  2:49,  0 user,  load average: 0.16, 0.26, 0.74
	Linux functional-198694 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 01 21:07:07 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:07:08 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 643.
	Dec 01 21:07:08 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:07:08 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:07:08 functional-198694 kubelet[5390]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:07:08 functional-198694 kubelet[5390]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:07:08 functional-198694 kubelet[5390]: E1201 21:07:08.700087    5390 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:07:08 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:07:08 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:07:09 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 644.
	Dec 01 21:07:09 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:07:09 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:07:09 functional-198694 kubelet[5440]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:07:09 functional-198694 kubelet[5440]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:07:09 functional-198694 kubelet[5440]: E1201 21:07:09.468821    5440 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:07:09 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:07:09 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:07:10 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 645.
	Dec 01 21:07:10 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:07:10 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:07:10 functional-198694 kubelet[5494]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:07:10 functional-198694 kubelet[5494]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:07:10 functional-198694 kubelet[5494]: E1201 21:07:10.213208    5494 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:07:10 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:07:10 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694: exit status 6 (363.256076ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1201 21:07:11.180988  521891 status.go:458] kubeconfig endpoint: get endpoint: "functional-198694" does not appear in /home/jenkins/minikube-integration/21997-482752/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-198694" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (509.18s)
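Note on the failure above: the SystemVerification warning and the kubelet crash loop both point at the same cause, and the warning itself names the opt-out: "you must set the kubelet configuration option 'FailCgroupV1' to 'false'". As a sketch only (the field name is taken from that warning; it has not been verified against this minikube/kubelet build), the corresponding KubeletConfiguration fragment would look like:

```yaml
# Sketch based on the SystemVerification warning in the log above:
# on a cgroup v1 host, kubelet v1.35+ exits at startup unless cgroup v1
# support is explicitly re-enabled via failCgroupV1.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false
```

minikube's own suggestion line in the log ("try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start") shows the usual delivery mechanism for kubelet settings in these tests, so an --extra-config override may be the practical route rather than editing /var/lib/kubelet/config.yaml by hand.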

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1201 21:07:11.197760  486002 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-198694 --alsologtostderr -v=8
E1201 21:07:52.876746  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:08:20.583640  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:10:34.916012  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:12:52.876990  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-198694 --alsologtostderr -v=8: exit status 80 (6m5.983088287s)

-- stdout --
	* [functional-198694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-198694" primary control-plane node in "functional-198694" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1201 21:07:11.242920  521964 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:07:11.243351  521964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:07:11.243387  521964 out.go:374] Setting ErrFile to fd 2...
	I1201 21:07:11.243410  521964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:07:11.243711  521964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:07:11.244177  521964 out.go:368] Setting JSON to false
	I1201 21:07:11.245066  521964 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10181,"bootTime":1764613051,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 21:07:11.245167  521964 start.go:143] virtualization:  
	I1201 21:07:11.248721  521964 out.go:179] * [functional-198694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 21:07:11.252584  521964 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 21:07:11.252676  521964 notify.go:221] Checking for updates...
	I1201 21:07:11.258436  521964 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 21:07:11.261368  521964 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:11.264327  521964 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 21:07:11.267307  521964 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 21:07:11.270189  521964 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 21:07:11.273718  521964 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:07:11.273862  521964 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 21:07:11.298213  521964 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 21:07:11.298331  521964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:07:11.359645  521964 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 21:07:11.34998497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:07:11.359790  521964 docker.go:319] overlay module found
	I1201 21:07:11.364655  521964 out.go:179] * Using the docker driver based on existing profile
	I1201 21:07:11.367463  521964 start.go:309] selected driver: docker
	I1201 21:07:11.367488  521964 start.go:927] validating driver "docker" against &{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:07:11.367603  521964 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 21:07:11.367700  521964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:07:11.423386  521964 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 21:07:11.414394313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:07:11.423798  521964 cni.go:84] Creating CNI manager for ""
	I1201 21:07:11.423867  521964 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:07:11.423916  521964 start.go:353] cluster config:
	{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:07:11.427203  521964 out.go:179] * Starting "functional-198694" primary control-plane node in "functional-198694" cluster
	I1201 21:07:11.430063  521964 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 21:07:11.433025  521964 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 21:07:11.436022  521964 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:07:11.436110  521964 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 21:07:11.455717  521964 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 21:07:11.455744  521964 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1201 21:07:11.500566  521964 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1201 21:07:11.687123  521964 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1201 21:07:11.687287  521964 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/config.json ...
	I1201 21:07:11.687539  521964 cache.go:243] Successfully downloaded all kic artifacts
	I1201 21:07:11.687581  521964 start.go:360] acquireMachinesLock for functional-198694: {Name:mk75190be8638b73bbf357fb21be879be3d32136 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.687647  521964 start.go:364] duration metric: took 33.501µs to acquireMachinesLock for "functional-198694"
	I1201 21:07:11.687664  521964 start.go:96] Skipping create...Using existing machine configuration
	I1201 21:07:11.687669  521964 fix.go:54] fixHost starting: 
	I1201 21:07:11.687932  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:11.688204  521964 cache.go:107] acquiring lock: {Name:mkc02adc0b0ac86da96d7b1c6f73dd96db198bdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688271  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 21:07:11.688285  521964 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.581µs
	I1201 21:07:11.688306  521964 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 21:07:11.688318  521964 cache.go:107] acquiring lock: {Name:mk453dcc67fddeb9d4497c9de9efb4fa1295449c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688354  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 21:07:11.688367  521964 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 50.575µs
	I1201 21:07:11.688373  521964 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688390  521964 cache.go:107] acquiring lock: {Name:mk419ddf7fad28d46855543ef84396416e53becc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688439  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 21:07:11.688445  521964 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 57.213µs
	I1201 21:07:11.688452  521964 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688467  521964 cache.go:107] acquiring lock: {Name:mka55d294ab8a696f44b35601f713e0abbf24c5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688503  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 21:07:11.688513  521964 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 47.581µs
	I1201 21:07:11.688520  521964 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688529  521964 cache.go:107] acquiring lock: {Name:mk6dcec1fac0989e081c750d70caa7d5974f0e1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688566  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 21:07:11.688576  521964 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 47.712µs
	I1201 21:07:11.688582  521964 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688591  521964 cache.go:107] acquiring lock: {Name:mkf9aa1f704582196eb72cf90c132f43843b4423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688628  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1201 21:07:11.688637  521964 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 46.916µs
	I1201 21:07:11.688643  521964 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 21:07:11.688652  521964 cache.go:107] acquiring lock: {Name:mk60d129c4890b38a9b86e2bfa4a9fa21bc4f57a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688684  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 21:07:11.688693  521964 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 41.952µs
	I1201 21:07:11.688698  521964 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 21:07:11.688707  521964 cache.go:107] acquiring lock: {Name:mk345d9c863dd9143d9156cb17f795118869c197 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688742  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 21:07:11.688749  521964 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 43.527µs
	I1201 21:07:11.688755  521964 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 21:07:11.688763  521964 cache.go:87] Successfully saved all images to host disk.
	I1201 21:07:11.706210  521964 fix.go:112] recreateIfNeeded on functional-198694: state=Running err=<nil>
	W1201 21:07:11.706244  521964 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 21:07:11.709560  521964 out.go:252] * Updating the running docker "functional-198694" container ...
	I1201 21:07:11.709599  521964 machine.go:94] provisionDockerMachine start ...
	I1201 21:07:11.709692  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:11.727308  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:11.727671  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:11.727690  521964 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 21:07:11.874686  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:07:11.874711  521964 ubuntu.go:182] provisioning hostname "functional-198694"
	I1201 21:07:11.874786  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:11.892845  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:11.893165  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:11.893181  521964 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-198694 && echo "functional-198694" | sudo tee /etc/hostname
	I1201 21:07:12.052942  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:07:12.053034  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.072030  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:12.072356  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:12.072379  521964 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-198694' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-198694/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-198694' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 21:07:12.227676  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 21:07:12.227702  521964 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-482752/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-482752/.minikube}
	I1201 21:07:12.227769  521964 ubuntu.go:190] setting up certificates
	I1201 21:07:12.227787  521964 provision.go:84] configureAuth start
	I1201 21:07:12.227860  521964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:07:12.247353  521964 provision.go:143] copyHostCerts
	I1201 21:07:12.247405  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 21:07:12.247445  521964 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem, removing ...
	I1201 21:07:12.247463  521964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 21:07:12.247541  521964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem (1082 bytes)
	I1201 21:07:12.247639  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 21:07:12.247660  521964 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem, removing ...
	I1201 21:07:12.247665  521964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 21:07:12.247698  521964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem (1123 bytes)
	I1201 21:07:12.247755  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 21:07:12.247776  521964 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem, removing ...
	I1201 21:07:12.247785  521964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 21:07:12.247814  521964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem (1675 bytes)
	I1201 21:07:12.247874  521964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem org=jenkins.functional-198694 san=[127.0.0.1 192.168.49.2 functional-198694 localhost minikube]
	I1201 21:07:12.352949  521964 provision.go:177] copyRemoteCerts
	I1201 21:07:12.353031  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 21:07:12.353075  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.373178  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:12.479006  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1201 21:07:12.479125  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 21:07:12.496931  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1201 21:07:12.497043  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1201 21:07:12.515649  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1201 21:07:12.515717  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 21:07:12.533930  521964 provision.go:87] duration metric: took 306.12888ms to configureAuth
	I1201 21:07:12.533957  521964 ubuntu.go:206] setting minikube options for container-runtime
	I1201 21:07:12.534156  521964 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:07:12.534262  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.551972  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:12.552286  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:12.552304  521964 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 21:07:12.889959  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 21:07:12.889981  521964 machine.go:97] duration metric: took 1.180373916s to provisionDockerMachine
	I1201 21:07:12.889993  521964 start.go:293] postStartSetup for "functional-198694" (driver="docker")
	I1201 21:07:12.890006  521964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 21:07:12.890086  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 21:07:12.890139  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.908762  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.018597  521964 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 21:07:13.022335  521964 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1201 21:07:13.022369  521964 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1201 21:07:13.022376  521964 command_runner.go:130] > VERSION_ID="12"
	I1201 21:07:13.022381  521964 command_runner.go:130] > VERSION="12 (bookworm)"
	I1201 21:07:13.022386  521964 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1201 21:07:13.022390  521964 command_runner.go:130] > ID=debian
	I1201 21:07:13.022396  521964 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1201 21:07:13.022401  521964 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1201 21:07:13.022407  521964 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1201 21:07:13.022493  521964 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 21:07:13.022513  521964 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 21:07:13.022526  521964 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/addons for local assets ...
	I1201 21:07:13.022584  521964 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/files for local assets ...
	I1201 21:07:13.022685  521964 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> 4860022.pem in /etc/ssl/certs
	I1201 21:07:13.022696  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> /etc/ssl/certs/4860022.pem
	I1201 21:07:13.022772  521964 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts -> hosts in /etc/test/nested/copy/486002
	I1201 21:07:13.022784  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts -> /etc/test/nested/copy/486002/hosts
	I1201 21:07:13.022828  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/486002
	I1201 21:07:13.031305  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:07:13.050359  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts --> /etc/test/nested/copy/486002/hosts (40 bytes)
	I1201 21:07:13.069098  521964 start.go:296] duration metric: took 179.090292ms for postStartSetup
	I1201 21:07:13.069200  521964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 21:07:13.069250  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:13.087931  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.188150  521964 command_runner.go:130] > 18%
	I1201 21:07:13.188720  521964 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 21:07:13.193507  521964 command_runner.go:130] > 161G
	I1201 21:07:13.195867  521964 fix.go:56] duration metric: took 1.508190835s for fixHost
	I1201 21:07:13.195933  521964 start.go:83] releasing machines lock for "functional-198694", held for 1.508273853s
	I1201 21:07:13.196019  521964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:07:13.216611  521964 ssh_runner.go:195] Run: cat /version.json
	I1201 21:07:13.216667  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:13.216936  521964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 21:07:13.216990  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:13.238266  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.249198  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.342561  521964 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1201 21:07:13.342766  521964 ssh_runner.go:195] Run: systemctl --version
	I1201 21:07:13.434302  521964 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1201 21:07:13.434432  521964 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1201 21:07:13.434476  521964 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1201 21:07:13.434562  521964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 21:07:13.473148  521964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1201 21:07:13.477954  521964 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1201 21:07:13.478007  521964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 21:07:13.478081  521964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 21:07:13.486513  521964 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 21:07:13.486536  521964 start.go:496] detecting cgroup driver to use...
	I1201 21:07:13.486599  521964 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1201 21:07:13.486671  521964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 21:07:13.502588  521964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 21:07:13.515851  521964 docker.go:218] disabling cri-docker service (if available) ...
	I1201 21:07:13.515935  521964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 21:07:13.531981  521964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 21:07:13.545612  521964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 21:07:13.660013  521964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 21:07:13.783921  521964 docker.go:234] disabling docker service ...
	I1201 21:07:13.783999  521964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 21:07:13.801145  521964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 21:07:13.814790  521964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 21:07:13.959260  521964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 21:07:14.082027  521964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 21:07:14.096899  521964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 21:07:14.110653  521964 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1201 21:07:14.112111  521964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 21:07:14.112234  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.121522  521964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 21:07:14.121606  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.132262  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.141626  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.151111  521964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 21:07:14.160033  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.169622  521964 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.178443  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
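The four `sed` invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: set the pause image, set the cgroup manager, then drop and re-add `conmon_cgroup` after the `cgroup_manager` line. A safe way to see their combined effect is to replay the same substitutions against a scratch copy (the starting values below are assumptions for illustration, not taken from the log):

```shell
# Work on a scratch copy of the drop-in so this is safe to run anywhere.
conf="$(mktemp)"
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# The same substitutions the log applies to /etc/crio/crio.conf.d/02-crio.conf:
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

cat "$conf"
```

Deleting `conmon_cgroup` before re-appending it keeps the edit idempotent: re-running the sequence never produces a duplicate key.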
	I1201 21:07:14.187976  521964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 21:07:14.194851  521964 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1201 21:07:14.196003  521964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
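The two kernel settings touched above, `net.bridge.bridge-nf-call-iptables` and `net.ipv4.ip_forward`, are prerequisites for pod networking. A root-free way to inspect them is to read /proc directly (a sketch; on hosts without the br_netfilter module the bridge sysctl file simply does not exist):

```shell
# Read-only check of the two kernel prerequisites the log verifies/sets;
# reading /proc directly needs no root and no sysctl binary.
out="$(
for f in /proc/sys/net/ipv4/ip_forward /proc/sys/net/bridge/bridge-nf-call-iptables; do
  if [ -r "$f" ]; then
    printf '%s = %s\n' "$f" "$(cat "$f")"
  else
    printf '%s missing (br_netfilter not loaded?)\n' "$f"
  fi
done
)"
printf '%s\n' "$out"
```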
	I1201 21:07:14.203835  521964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:07:14.312679  521964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 21:07:14.495171  521964 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 21:07:14.495301  521964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 21:07:14.499086  521964 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1201 21:07:14.499110  521964 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1201 21:07:14.499118  521964 command_runner.go:130] > Device: 0,72	Inode: 1746        Links: 1
	I1201 21:07:14.499125  521964 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1201 21:07:14.499150  521964 command_runner.go:130] > Access: 2025-12-01 21:07:14.424432171 +0000
	I1201 21:07:14.499176  521964 command_runner.go:130] > Modify: 2025-12-01 21:07:14.424432171 +0000
	I1201 21:07:14.499186  521964 command_runner.go:130] > Change: 2025-12-01 21:07:14.424432171 +0000
	I1201 21:07:14.499190  521964 command_runner.go:130] >  Birth: -
	I1201 21:07:14.499219  521964 start.go:564] Will wait 60s for crictl version
	I1201 21:07:14.499275  521964 ssh_runner.go:195] Run: which crictl
	I1201 21:07:14.502678  521964 command_runner.go:130] > /usr/local/bin/crictl
	I1201 21:07:14.502996  521964 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 21:07:14.524882  521964 command_runner.go:130] > Version:  0.1.0
	I1201 21:07:14.524906  521964 command_runner.go:130] > RuntimeName:  cri-o
	I1201 21:07:14.524912  521964 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1201 21:07:14.524918  521964 command_runner.go:130] > RuntimeApiVersion:  v1
	I1201 21:07:14.526840  521964 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 21:07:14.526982  521964 ssh_runner.go:195] Run: crio --version
	I1201 21:07:14.553910  521964 command_runner.go:130] > crio version 1.34.2
	I1201 21:07:14.553933  521964 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1201 21:07:14.553939  521964 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1201 21:07:14.553944  521964 command_runner.go:130] >    GitTreeState:   dirty
	I1201 21:07:14.553950  521964 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1201 21:07:14.553971  521964 command_runner.go:130] >    GoVersion:      go1.24.6
	I1201 21:07:14.553976  521964 command_runner.go:130] >    Compiler:       gc
	I1201 21:07:14.553980  521964 command_runner.go:130] >    Platform:       linux/arm64
	I1201 21:07:14.553984  521964 command_runner.go:130] >    Linkmode:       static
	I1201 21:07:14.553987  521964 command_runner.go:130] >    BuildTags:
	I1201 21:07:14.553991  521964 command_runner.go:130] >      static
	I1201 21:07:14.553994  521964 command_runner.go:130] >      netgo
	I1201 21:07:14.553998  521964 command_runner.go:130] >      osusergo
	I1201 21:07:14.554001  521964 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1201 21:07:14.554009  521964 command_runner.go:130] >      seccomp
	I1201 21:07:14.554012  521964 command_runner.go:130] >      apparmor
	I1201 21:07:14.554016  521964 command_runner.go:130] >      selinux
	I1201 21:07:14.554020  521964 command_runner.go:130] >    LDFlags:          unknown
	I1201 21:07:14.554024  521964 command_runner.go:130] >    SeccompEnabled:   true
	I1201 21:07:14.554028  521964 command_runner.go:130] >    AppArmorEnabled:  false
	I1201 21:07:14.556106  521964 ssh_runner.go:195] Run: crio --version
	I1201 21:07:14.582720  521964 command_runner.go:130] > crio version 1.34.2
	I1201 21:07:14.582784  521964 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1201 21:07:14.582817  521964 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1201 21:07:14.582840  521964 command_runner.go:130] >    GitTreeState:   dirty
	I1201 21:07:14.582863  521964 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1201 21:07:14.582897  521964 command_runner.go:130] >    GoVersion:      go1.24.6
	I1201 21:07:14.582922  521964 command_runner.go:130] >    Compiler:       gc
	I1201 21:07:14.582947  521964 command_runner.go:130] >    Platform:       linux/arm64
	I1201 21:07:14.582984  521964 command_runner.go:130] >    Linkmode:       static
	I1201 21:07:14.583008  521964 command_runner.go:130] >    BuildTags:
	I1201 21:07:14.583029  521964 command_runner.go:130] >      static
	I1201 21:07:14.583063  521964 command_runner.go:130] >      netgo
	I1201 21:07:14.583085  521964 command_runner.go:130] >      osusergo
	I1201 21:07:14.583101  521964 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1201 21:07:14.583121  521964 command_runner.go:130] >      seccomp
	I1201 21:07:14.583170  521964 command_runner.go:130] >      apparmor
	I1201 21:07:14.583196  521964 command_runner.go:130] >      selinux
	I1201 21:07:14.583217  521964 command_runner.go:130] >    LDFlags:          unknown
	I1201 21:07:14.583262  521964 command_runner.go:130] >    SeccompEnabled:   true
	I1201 21:07:14.583287  521964 command_runner.go:130] >    AppArmorEnabled:  false
	I1201 21:07:14.589911  521964 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1201 21:07:14.592808  521964 cli_runner.go:164] Run: docker network inspect functional-198694 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 21:07:14.609405  521964 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1201 21:07:14.613461  521964 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1201 21:07:14.613638  521964 kubeadm.go:884] updating cluster {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 21:07:14.613753  521964 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:07:14.613807  521964 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 21:07:14.655721  521964 command_runner.go:130] > {
	I1201 21:07:14.655745  521964 command_runner.go:130] >   "images":  [
	I1201 21:07:14.655750  521964 command_runner.go:130] >     {
	I1201 21:07:14.655758  521964 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1201 21:07:14.655763  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.655768  521964 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1201 21:07:14.655771  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655775  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.655786  521964 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1201 21:07:14.655790  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655794  521964 command_runner.go:130] >       "size":  "29035622",
	I1201 21:07:14.655798  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.655803  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.655811  521964 command_runner.go:130] >     },
	I1201 21:07:14.655815  521964 command_runner.go:130] >     {
	I1201 21:07:14.655825  521964 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1201 21:07:14.655839  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.655846  521964 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1201 21:07:14.655854  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655858  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.655866  521964 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1201 21:07:14.655871  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655876  521964 command_runner.go:130] >       "size":  "74488375",
	I1201 21:07:14.655880  521964 command_runner.go:130] >       "username":  "nonroot",
	I1201 21:07:14.655884  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.655888  521964 command_runner.go:130] >     },
	I1201 21:07:14.655891  521964 command_runner.go:130] >     {
	I1201 21:07:14.655901  521964 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1201 21:07:14.655907  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.655912  521964 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1201 21:07:14.655918  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655927  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.655946  521964 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1201 21:07:14.655955  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655960  521964 command_runner.go:130] >       "size":  "60854229",
	I1201 21:07:14.655965  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.655974  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.655978  521964 command_runner.go:130] >       },
	I1201 21:07:14.655982  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.655986  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.655989  521964 command_runner.go:130] >     },
	I1201 21:07:14.655995  521964 command_runner.go:130] >     {
	I1201 21:07:14.656002  521964 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1201 21:07:14.656010  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656015  521964 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1201 21:07:14.656018  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656024  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656033  521964 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1201 21:07:14.656040  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656044  521964 command_runner.go:130] >       "size":  "84947242",
	I1201 21:07:14.656047  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656051  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.656061  521964 command_runner.go:130] >       },
	I1201 21:07:14.656065  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656068  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656071  521964 command_runner.go:130] >     },
	I1201 21:07:14.656075  521964 command_runner.go:130] >     {
	I1201 21:07:14.656084  521964 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1201 21:07:14.656090  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656096  521964 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1201 21:07:14.656100  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656106  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656115  521964 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1201 21:07:14.656121  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656132  521964 command_runner.go:130] >       "size":  "72167568",
	I1201 21:07:14.656139  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656143  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.656146  521964 command_runner.go:130] >       },
	I1201 21:07:14.656150  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656154  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656160  521964 command_runner.go:130] >     },
	I1201 21:07:14.656163  521964 command_runner.go:130] >     {
	I1201 21:07:14.656170  521964 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1201 21:07:14.656176  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656182  521964 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1201 21:07:14.656185  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656209  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656218  521964 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1201 21:07:14.656223  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656228  521964 command_runner.go:130] >       "size":  "74105124",
	I1201 21:07:14.656231  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656236  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656241  521964 command_runner.go:130] >     },
	I1201 21:07:14.656245  521964 command_runner.go:130] >     {
	I1201 21:07:14.656251  521964 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1201 21:07:14.656257  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656262  521964 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1201 21:07:14.656268  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656272  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656279  521964 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1201 21:07:14.656285  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656289  521964 command_runner.go:130] >       "size":  "49819792",
	I1201 21:07:14.656293  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656303  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.656307  521964 command_runner.go:130] >       },
	I1201 21:07:14.656311  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656316  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656323  521964 command_runner.go:130] >     },
	I1201 21:07:14.656330  521964 command_runner.go:130] >     {
	I1201 21:07:14.656337  521964 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1201 21:07:14.656341  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656345  521964 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1201 21:07:14.656350  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656355  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656365  521964 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1201 21:07:14.656368  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656372  521964 command_runner.go:130] >       "size":  "517328",
	I1201 21:07:14.656378  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656383  521964 command_runner.go:130] >         "value":  "65535"
	I1201 21:07:14.656388  521964 command_runner.go:130] >       },
	I1201 21:07:14.656392  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656395  521964 command_runner.go:130] >       "pinned":  true
	I1201 21:07:14.656399  521964 command_runner.go:130] >     }
	I1201 21:07:14.656404  521964 command_runner.go:130] >   ]
	I1201 21:07:14.656408  521964 command_runner.go:130] > }
	I1201 21:07:14.656549  521964 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 21:07:14.656561  521964 cache_images.go:86] Images are preloaded, skipping loading
	I1201 21:07:14.656568  521964 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1201 21:07:14.656668  521964 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-198694 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 21:07:14.656752  521964 ssh_runner.go:195] Run: crio config
	I1201 21:07:14.734869  521964 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1201 21:07:14.734915  521964 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1201 21:07:14.734928  521964 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1201 21:07:14.734945  521964 command_runner.go:130] > #
	I1201 21:07:14.734957  521964 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1201 21:07:14.734978  521964 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1201 21:07:14.734989  521964 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1201 21:07:14.735001  521964 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1201 21:07:14.735009  521964 command_runner.go:130] > # reload'.
	I1201 21:07:14.735017  521964 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1201 21:07:14.735028  521964 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1201 21:07:14.735038  521964 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1201 21:07:14.735051  521964 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1201 21:07:14.735059  521964 command_runner.go:130] > [crio]
	I1201 21:07:14.735069  521964 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1201 21:07:14.735078  521964 command_runner.go:130] > # containers images, in this directory.
	I1201 21:07:14.735108  521964 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1201 21:07:14.735125  521964 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1201 21:07:14.735149  521964 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1201 21:07:14.735158  521964 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1201 21:07:14.735167  521964 command_runner.go:130] > # imagestore = ""
	I1201 21:07:14.735180  521964 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1201 21:07:14.735200  521964 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1201 21:07:14.735401  521964 command_runner.go:130] > # storage_driver = "overlay"
	I1201 21:07:14.735416  521964 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1201 21:07:14.735422  521964 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1201 21:07:14.735427  521964 command_runner.go:130] > # storage_option = [
	I1201 21:07:14.735430  521964 command_runner.go:130] > # ]
	I1201 21:07:14.735440  521964 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1201 21:07:14.735447  521964 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1201 21:07:14.735451  521964 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1201 21:07:14.735457  521964 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1201 21:07:14.735464  521964 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1201 21:07:14.735475  521964 command_runner.go:130] > # always happen on a node reboot
	I1201 21:07:14.735773  521964 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1201 21:07:14.735799  521964 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1201 21:07:14.735807  521964 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1201 21:07:14.735813  521964 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1201 21:07:14.735817  521964 command_runner.go:130] > # version_file_persist = ""
	I1201 21:07:14.735825  521964 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1201 21:07:14.735839  521964 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1201 21:07:14.735844  521964 command_runner.go:130] > # internal_wipe = true
	I1201 21:07:14.735852  521964 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1201 21:07:14.735858  521964 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1201 21:07:14.735861  521964 command_runner.go:130] > # internal_repair = true
	I1201 21:07:14.735867  521964 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1201 21:07:14.735873  521964 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1201 21:07:14.735882  521964 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1201 21:07:14.735891  521964 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1201 21:07:14.735901  521964 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1201 21:07:14.735904  521964 command_runner.go:130] > [crio.api]
	I1201 21:07:14.735909  521964 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1201 21:07:14.735916  521964 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1201 21:07:14.735921  521964 command_runner.go:130] > # IP address on which the stream server will listen.
	I1201 21:07:14.735925  521964 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1201 21:07:14.735932  521964 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1201 21:07:14.735946  521964 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1201 21:07:14.735950  521964 command_runner.go:130] > # stream_port = "0"
	I1201 21:07:14.735958  521964 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1201 21:07:14.735962  521964 command_runner.go:130] > # stream_enable_tls = false
	I1201 21:07:14.735968  521964 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1201 21:07:14.735972  521964 command_runner.go:130] > # stream_idle_timeout = ""
	I1201 21:07:14.735981  521964 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1201 21:07:14.735991  521964 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1201 21:07:14.735995  521964 command_runner.go:130] > # stream_tls_cert = ""
	I1201 21:07:14.736001  521964 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1201 21:07:14.736006  521964 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1201 21:07:14.736013  521964 command_runner.go:130] > # stream_tls_key = ""
	I1201 21:07:14.736023  521964 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1201 21:07:14.736030  521964 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1201 21:07:14.736037  521964 command_runner.go:130] > # automatically pick up the changes.
	I1201 21:07:14.736045  521964 command_runner.go:130] > # stream_tls_ca = ""
	I1201 21:07:14.736072  521964 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1201 21:07:14.736077  521964 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1201 21:07:14.736085  521964 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1201 21:07:14.736092  521964 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1201 21:07:14.736099  521964 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1201 21:07:14.736105  521964 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1201 21:07:14.736108  521964 command_runner.go:130] > [crio.runtime]
	I1201 21:07:14.736114  521964 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1201 21:07:14.736119  521964 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1201 21:07:14.736127  521964 command_runner.go:130] > # "nofile=1024:2048"
	I1201 21:07:14.736134  521964 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1201 21:07:14.736138  521964 command_runner.go:130] > # default_ulimits = [
	I1201 21:07:14.736141  521964 command_runner.go:130] > # ]
	I1201 21:07:14.736146  521964 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1201 21:07:14.736150  521964 command_runner.go:130] > # no_pivot = false
	I1201 21:07:14.736162  521964 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1201 21:07:14.736168  521964 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1201 21:07:14.736196  521964 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1201 21:07:14.736202  521964 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1201 21:07:14.736210  521964 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1201 21:07:14.736220  521964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1201 21:07:14.736223  521964 command_runner.go:130] > # conmon = ""
	I1201 21:07:14.736228  521964 command_runner.go:130] > # Cgroup setting for conmon
	I1201 21:07:14.736235  521964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1201 21:07:14.736239  521964 command_runner.go:130] > conmon_cgroup = "pod"
	I1201 21:07:14.736257  521964 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1201 21:07:14.736262  521964 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1201 21:07:14.736269  521964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1201 21:07:14.736273  521964 command_runner.go:130] > # conmon_env = [
	I1201 21:07:14.736276  521964 command_runner.go:130] > # ]
	I1201 21:07:14.736281  521964 command_runner.go:130] > # Additional environment variables to set for all the
	I1201 21:07:14.736286  521964 command_runner.go:130] > # containers. These are overridden if set in the
	I1201 21:07:14.736295  521964 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1201 21:07:14.736302  521964 command_runner.go:130] > # default_env = [
	I1201 21:07:14.736308  521964 command_runner.go:130] > # ]
	I1201 21:07:14.736314  521964 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1201 21:07:14.736322  521964 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1201 21:07:14.736328  521964 command_runner.go:130] > # selinux = false
	I1201 21:07:14.736356  521964 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1201 21:07:14.736370  521964 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1201 21:07:14.736375  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736379  521964 command_runner.go:130] > # seccomp_profile = ""
	I1201 21:07:14.736388  521964 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1201 21:07:14.736393  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736397  521964 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1201 21:07:14.736406  521964 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1201 21:07:14.736413  521964 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1201 21:07:14.736419  521964 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1201 21:07:14.736425  521964 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1201 21:07:14.736431  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736439  521964 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1201 21:07:14.736445  521964 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1201 21:07:14.736449  521964 command_runner.go:130] > # the cgroup blockio controller.
	I1201 21:07:14.736452  521964 command_runner.go:130] > # blockio_config_file = ""
	I1201 21:07:14.736459  521964 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1201 21:07:14.736463  521964 command_runner.go:130] > # blockio parameters.
	I1201 21:07:14.736467  521964 command_runner.go:130] > # blockio_reload = false
	I1201 21:07:14.736474  521964 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1201 21:07:14.736477  521964 command_runner.go:130] > # irqbalance daemon.
	I1201 21:07:14.736483  521964 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1201 21:07:14.736489  521964 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask that CRI-O should
	I1201 21:07:14.736496  521964 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1201 21:07:14.736508  521964 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1201 21:07:14.736514  521964 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1201 21:07:14.736523  521964 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1201 21:07:14.736532  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736536  521964 command_runner.go:130] > # rdt_config_file = ""
	I1201 21:07:14.736545  521964 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1201 21:07:14.736550  521964 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1201 21:07:14.736555  521964 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1201 21:07:14.736560  521964 command_runner.go:130] > # separate_pull_cgroup = ""
	I1201 21:07:14.736569  521964 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1201 21:07:14.736576  521964 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1201 21:07:14.736580  521964 command_runner.go:130] > # will be added.
	I1201 21:07:14.736585  521964 command_runner.go:130] > # default_capabilities = [
	I1201 21:07:14.737078  521964 command_runner.go:130] > # 	"CHOWN",
	I1201 21:07:14.737092  521964 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1201 21:07:14.737096  521964 command_runner.go:130] > # 	"FSETID",
	I1201 21:07:14.737099  521964 command_runner.go:130] > # 	"FOWNER",
	I1201 21:07:14.737102  521964 command_runner.go:130] > # 	"SETGID",
	I1201 21:07:14.737106  521964 command_runner.go:130] > # 	"SETUID",
	I1201 21:07:14.737130  521964 command_runner.go:130] > # 	"SETPCAP",
	I1201 21:07:14.737134  521964 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1201 21:07:14.737138  521964 command_runner.go:130] > # 	"KILL",
	I1201 21:07:14.737144  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737153  521964 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1201 21:07:14.737160  521964 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1201 21:07:14.737165  521964 command_runner.go:130] > # add_inheritable_capabilities = false
	I1201 21:07:14.737171  521964 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1201 21:07:14.737189  521964 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1201 21:07:14.737193  521964 command_runner.go:130] > default_sysctls = [
	I1201 21:07:14.737198  521964 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1201 21:07:14.737200  521964 command_runner.go:130] > ]
	I1201 21:07:14.737205  521964 command_runner.go:130] > # List of devices on the host that a
	I1201 21:07:14.737212  521964 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1201 21:07:14.737215  521964 command_runner.go:130] > # allowed_devices = [
	I1201 21:07:14.737219  521964 command_runner.go:130] > # 	"/dev/fuse",
	I1201 21:07:14.737222  521964 command_runner.go:130] > # 	"/dev/net/tun",
	I1201 21:07:14.737225  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737230  521964 command_runner.go:130] > # List of additional devices, specified as
	I1201 21:07:14.737237  521964 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1201 21:07:14.737243  521964 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1201 21:07:14.737249  521964 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1201 21:07:14.737253  521964 command_runner.go:130] > # additional_devices = [
	I1201 21:07:14.737257  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737266  521964 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1201 21:07:14.737271  521964 command_runner.go:130] > # cdi_spec_dirs = [
	I1201 21:07:14.737274  521964 command_runner.go:130] > # 	"/etc/cdi",
	I1201 21:07:14.737277  521964 command_runner.go:130] > # 	"/var/run/cdi",
	I1201 21:07:14.737280  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737286  521964 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1201 21:07:14.737293  521964 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1201 21:07:14.737297  521964 command_runner.go:130] > # Defaults to false.
	I1201 21:07:14.737311  521964 command_runner.go:130] > # device_ownership_from_security_context = false
	I1201 21:07:14.737318  521964 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1201 21:07:14.737324  521964 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1201 21:07:14.737327  521964 command_runner.go:130] > # hooks_dir = [
	I1201 21:07:14.737335  521964 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1201 21:07:14.737338  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737344  521964 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1201 21:07:14.737352  521964 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1201 21:07:14.737357  521964 command_runner.go:130] > # its default mounts from the following two files:
	I1201 21:07:14.737360  521964 command_runner.go:130] > #
	I1201 21:07:14.737366  521964 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1201 21:07:14.737372  521964 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1201 21:07:14.737378  521964 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1201 21:07:14.737380  521964 command_runner.go:130] > #
	I1201 21:07:14.737386  521964 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1201 21:07:14.737393  521964 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1201 21:07:14.737399  521964 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1201 21:07:14.737407  521964 command_runner.go:130] > #      only add mounts it finds in this file.
	I1201 21:07:14.737410  521964 command_runner.go:130] > #
	I1201 21:07:14.737414  521964 command_runner.go:130] > # default_mounts_file = ""
	I1201 21:07:14.737422  521964 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1201 21:07:14.737429  521964 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1201 21:07:14.737433  521964 command_runner.go:130] > # pids_limit = -1
	I1201 21:07:14.737440  521964 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1201 21:07:14.737446  521964 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1201 21:07:14.737452  521964 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1201 21:07:14.737460  521964 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1201 21:07:14.737464  521964 command_runner.go:130] > # log_size_max = -1
	I1201 21:07:14.737472  521964 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1201 21:07:14.737476  521964 command_runner.go:130] > # log_to_journald = false
	I1201 21:07:14.737487  521964 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1201 21:07:14.737492  521964 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1201 21:07:14.737497  521964 command_runner.go:130] > # Path to directory for container attach sockets.
	I1201 21:07:14.737502  521964 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1201 21:07:14.737511  521964 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1201 21:07:14.737516  521964 command_runner.go:130] > # bind_mount_prefix = ""
	I1201 21:07:14.737521  521964 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1201 21:07:14.737528  521964 command_runner.go:130] > # read_only = false
	I1201 21:07:14.737534  521964 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1201 21:07:14.737541  521964 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1201 21:07:14.737545  521964 command_runner.go:130] > # live configuration reload.
	I1201 21:07:14.737549  521964 command_runner.go:130] > # log_level = "info"
	I1201 21:07:14.737557  521964 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1201 21:07:14.737563  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.737567  521964 command_runner.go:130] > # log_filter = ""
	I1201 21:07:14.737573  521964 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1201 21:07:14.737583  521964 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1201 21:07:14.737588  521964 command_runner.go:130] > # separated by comma.
	I1201 21:07:14.737596  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737599  521964 command_runner.go:130] > # uid_mappings = ""
	I1201 21:07:14.737606  521964 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1201 21:07:14.737612  521964 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1201 21:07:14.737616  521964 command_runner.go:130] > # separated by comma.
	I1201 21:07:14.737624  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737627  521964 command_runner.go:130] > # gid_mappings = ""
	I1201 21:07:14.737634  521964 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1201 21:07:14.737640  521964 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1201 21:07:14.737646  521964 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1201 21:07:14.737660  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737665  521964 command_runner.go:130] > # minimum_mappable_uid = -1
	I1201 21:07:14.737674  521964 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1201 21:07:14.737681  521964 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1201 21:07:14.737686  521964 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1201 21:07:14.737694  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737937  521964 command_runner.go:130] > # minimum_mappable_gid = -1
	I1201 21:07:14.737957  521964 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1201 21:07:14.737967  521964 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1201 21:07:14.737974  521964 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1201 21:07:14.737980  521964 command_runner.go:130] > # ctr_stop_timeout = 30
	I1201 21:07:14.737998  521964 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1201 21:07:14.738018  521964 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1201 21:07:14.738028  521964 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1201 21:07:14.738033  521964 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1201 21:07:14.738042  521964 command_runner.go:130] > # drop_infra_ctr = true
	I1201 21:07:14.738048  521964 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1201 21:07:14.738058  521964 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1201 21:07:14.738073  521964 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1201 21:07:14.738082  521964 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1201 21:07:14.738090  521964 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1201 21:07:14.738099  521964 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1201 21:07:14.738106  521964 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1201 21:07:14.738116  521964 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1201 21:07:14.738120  521964 command_runner.go:130] > # shared_cpuset = ""
	I1201 21:07:14.738130  521964 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1201 21:07:14.738139  521964 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1201 21:07:14.738154  521964 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1201 21:07:14.738162  521964 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1201 21:07:14.738167  521964 command_runner.go:130] > # pinns_path = ""
	I1201 21:07:14.738173  521964 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1201 21:07:14.738182  521964 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1201 21:07:14.738191  521964 command_runner.go:130] > # enable_criu_support = true
	I1201 21:07:14.738197  521964 command_runner.go:130] > # Enable/disable the generation of the container and
	I1201 21:07:14.738206  521964 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1201 21:07:14.738221  521964 command_runner.go:130] > # enable_pod_events = false
	I1201 21:07:14.738232  521964 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1201 21:07:14.738238  521964 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1201 21:07:14.738242  521964 command_runner.go:130] > # default_runtime = "crun"
	I1201 21:07:14.738251  521964 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1201 21:07:14.738259  521964 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1201 21:07:14.738269  521964 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1201 21:07:14.738278  521964 command_runner.go:130] > # creation as a file is not desired either.
	I1201 21:07:14.738287  521964 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1201 21:07:14.738304  521964 command_runner.go:130] > # the hostname is being managed dynamically.
	I1201 21:07:14.738322  521964 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1201 21:07:14.738329  521964 command_runner.go:130] > # ]
	I1201 21:07:14.738336  521964 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1201 21:07:14.738347  521964 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1201 21:07:14.738353  521964 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1201 21:07:14.738358  521964 command_runner.go:130] > # Each entry in the table should follow the format:
	I1201 21:07:14.738365  521964 command_runner.go:130] > #
	I1201 21:07:14.738381  521964 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1201 21:07:14.738387  521964 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1201 21:07:14.738394  521964 command_runner.go:130] > # runtime_type = "oci"
	I1201 21:07:14.738400  521964 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1201 21:07:14.738408  521964 command_runner.go:130] > # inherit_default_runtime = false
	I1201 21:07:14.738414  521964 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1201 21:07:14.738421  521964 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1201 21:07:14.738426  521964 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1201 21:07:14.738434  521964 command_runner.go:130] > # monitor_env = []
	I1201 21:07:14.738439  521964 command_runner.go:130] > # privileged_without_host_devices = false
	I1201 21:07:14.738449  521964 command_runner.go:130] > # allowed_annotations = []
	I1201 21:07:14.738459  521964 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1201 21:07:14.738463  521964 command_runner.go:130] > # no_sync_log = false
	I1201 21:07:14.738469  521964 command_runner.go:130] > # default_annotations = {}
	I1201 21:07:14.738473  521964 command_runner.go:130] > # stream_websockets = false
	I1201 21:07:14.738481  521964 command_runner.go:130] > # seccomp_profile = ""
	I1201 21:07:14.738515  521964 command_runner.go:130] > # Where:
	I1201 21:07:14.738533  521964 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1201 21:07:14.738539  521964 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1201 21:07:14.738546  521964 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1201 21:07:14.738556  521964 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1201 21:07:14.738560  521964 command_runner.go:130] > #   in $PATH.
	I1201 21:07:14.738572  521964 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1201 21:07:14.738581  521964 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1201 21:07:14.738587  521964 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1201 21:07:14.738601  521964 command_runner.go:130] > #   state.
	I1201 21:07:14.738612  521964 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1201 21:07:14.738623  521964 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1201 21:07:14.738629  521964 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1201 21:07:14.738641  521964 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1201 21:07:14.738648  521964 command_runner.go:130] > #   the values from the default runtime on load time.
	I1201 21:07:14.738658  521964 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1201 21:07:14.738675  521964 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1201 21:07:14.738686  521964 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1201 21:07:14.738697  521964 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1201 21:07:14.738706  521964 command_runner.go:130] > #   The currently recognized values are:
	I1201 21:07:14.738713  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1201 21:07:14.738722  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1201 21:07:14.738731  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1201 21:07:14.738737  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1201 21:07:14.738751  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1201 21:07:14.738762  521964 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1201 21:07:14.738774  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1201 21:07:14.738785  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1201 21:07:14.738795  521964 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1201 21:07:14.738801  521964 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1201 21:07:14.738814  521964 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1201 21:07:14.738830  521964 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1201 21:07:14.738841  521964 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1201 21:07:14.738847  521964 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1201 21:07:14.738857  521964 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1201 21:07:14.738871  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1201 21:07:14.738878  521964 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1201 21:07:14.738885  521964 command_runner.go:130] > #   deprecated option "conmon".
	I1201 21:07:14.738904  521964 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1201 21:07:14.738913  521964 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1201 21:07:14.738921  521964 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1201 21:07:14.738930  521964 command_runner.go:130] > #   should be moved to the container's cgroup
	I1201 21:07:14.738937  521964 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1201 21:07:14.738949  521964 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1201 21:07:14.738961  521964 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1201 21:07:14.738974  521964 command_runner.go:130] > #   conmon-rs by using:
	I1201 21:07:14.738982  521964 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1201 21:07:14.738996  521964 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1201 21:07:14.739008  521964 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1201 21:07:14.739024  521964 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1201 21:07:14.739033  521964 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1201 21:07:14.739040  521964 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1201 21:07:14.739057  521964 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1201 21:07:14.739067  521964 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1201 21:07:14.739077  521964 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1201 21:07:14.739089  521964 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1201 21:07:14.739097  521964 command_runner.go:130] > #   when a machine crash happens.
	I1201 21:07:14.739105  521964 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1201 21:07:14.739117  521964 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1201 21:07:14.739152  521964 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1201 21:07:14.739158  521964 command_runner.go:130] > #   seccomp profile for the runtime.
	I1201 21:07:14.739165  521964 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1201 21:07:14.739172  521964 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1201 21:07:14.739175  521964 command_runner.go:130] > #
	I1201 21:07:14.739179  521964 command_runner.go:130] > # Using the seccomp notifier feature:
	I1201 21:07:14.739182  521964 command_runner.go:130] > #
	I1201 21:07:14.739188  521964 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1201 21:07:14.739195  521964 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1201 21:07:14.739204  521964 command_runner.go:130] > #
	I1201 21:07:14.739211  521964 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1201 21:07:14.739217  521964 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1201 21:07:14.739220  521964 command_runner.go:130] > #
	I1201 21:07:14.739225  521964 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1201 21:07:14.739228  521964 command_runner.go:130] > # feature.
	I1201 21:07:14.739231  521964 command_runner.go:130] > #
	I1201 21:07:14.739237  521964 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1201 21:07:14.739247  521964 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1201 21:07:14.739257  521964 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1201 21:07:14.739263  521964 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1201 21:07:14.739270  521964 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1201 21:07:14.739281  521964 command_runner.go:130] > #
	I1201 21:07:14.739288  521964 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1201 21:07:14.739293  521964 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1201 21:07:14.739296  521964 command_runner.go:130] > #
	I1201 21:07:14.739302  521964 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1201 21:07:14.739308  521964 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1201 21:07:14.739310  521964 command_runner.go:130] > #
	I1201 21:07:14.739316  521964 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1201 21:07:14.739322  521964 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1201 21:07:14.739325  521964 command_runner.go:130] > # limitation.
	I1201 21:07:14.739329  521964 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1201 21:07:14.739334  521964 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1201 21:07:14.739337  521964 command_runner.go:130] > runtime_type = ""
	I1201 21:07:14.739341  521964 command_runner.go:130] > runtime_root = "/run/crun"
	I1201 21:07:14.739345  521964 command_runner.go:130] > inherit_default_runtime = false
	I1201 21:07:14.739356  521964 command_runner.go:130] > runtime_config_path = ""
	I1201 21:07:14.739360  521964 command_runner.go:130] > container_min_memory = ""
	I1201 21:07:14.739365  521964 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1201 21:07:14.739369  521964 command_runner.go:130] > monitor_cgroup = "pod"
	I1201 21:07:14.739373  521964 command_runner.go:130] > monitor_exec_cgroup = ""
	I1201 21:07:14.739380  521964 command_runner.go:130] > allowed_annotations = [
	I1201 21:07:14.739384  521964 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1201 21:07:14.739391  521964 command_runner.go:130] > ]
	I1201 21:07:14.739396  521964 command_runner.go:130] > privileged_without_host_devices = false
	I1201 21:07:14.739400  521964 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1201 21:07:14.739409  521964 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1201 21:07:14.739413  521964 command_runner.go:130] > runtime_type = ""
	I1201 21:07:14.739420  521964 command_runner.go:130] > runtime_root = "/run/runc"
	I1201 21:07:14.739434  521964 command_runner.go:130] > inherit_default_runtime = false
	I1201 21:07:14.739442  521964 command_runner.go:130] > runtime_config_path = ""
	I1201 21:07:14.739450  521964 command_runner.go:130] > container_min_memory = ""
	I1201 21:07:14.739455  521964 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1201 21:07:14.739459  521964 command_runner.go:130] > monitor_cgroup = "pod"
	I1201 21:07:14.739465  521964 command_runner.go:130] > monitor_exec_cgroup = ""
	I1201 21:07:14.739470  521964 command_runner.go:130] > privileged_without_host_devices = false
	I1201 21:07:14.739481  521964 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1201 21:07:14.739490  521964 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1201 21:07:14.739507  521964 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1201 21:07:14.739519  521964 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1201 21:07:14.739534  521964 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1201 21:07:14.739546  521964 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1201 21:07:14.739559  521964 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1201 21:07:14.739569  521964 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1201 21:07:14.739589  521964 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1201 21:07:14.739601  521964 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1201 21:07:14.739616  521964 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1201 21:07:14.739627  521964 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1201 21:07:14.739635  521964 command_runner.go:130] > # Example:
	I1201 21:07:14.739639  521964 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1201 21:07:14.739652  521964 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1201 21:07:14.739663  521964 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1201 21:07:14.739669  521964 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1201 21:07:14.739672  521964 command_runner.go:130] > # cpuset = "0-1"
	I1201 21:07:14.739681  521964 command_runner.go:130] > # cpushares = "5"
	I1201 21:07:14.739685  521964 command_runner.go:130] > # cpuquota = "1000"
	I1201 21:07:14.739694  521964 command_runner.go:130] > # cpuperiod = "100000"
	I1201 21:07:14.739698  521964 command_runner.go:130] > # cpulimit = "35"
	I1201 21:07:14.739705  521964 command_runner.go:130] > # Where:
	I1201 21:07:14.739709  521964 command_runner.go:130] > # The workload name is workload-type.
	I1201 21:07:14.739716  521964 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1201 21:07:14.739728  521964 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1201 21:07:14.739739  521964 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1201 21:07:14.739752  521964 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1201 21:07:14.739762  521964 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
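	The workload opt-in mechanism explained in the config comments above can be sketched as pod annotations. This is a hedged illustration with hypothetical names, assuming the example [crio.runtime.workloads.workload-type] stanza shown in this config.

```yaml
# Sketch: pod opting into the "workload-type" workload from the example above.
# Per-container keys follow the $annotation_prefix.$resource/$ctrName form.
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo                          # hypothetical name
  annotations:
    io.crio/workload: ""                       # activation_annotation; value ignored
    io.crio.workload-type.cpushares/app: "512" # override default cpushares for "app"
spec:
  containers:
    - name: app
      image: registry.example/app:latest       # placeholder image
```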
	I1201 21:07:14.739768  521964 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1201 21:07:14.739778  521964 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1201 21:07:14.739786  521964 command_runner.go:130] > # Default value is set to true
	I1201 21:07:14.739791  521964 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1201 21:07:14.739803  521964 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1201 21:07:14.739813  521964 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1201 21:07:14.739818  521964 command_runner.go:130] > # Default value is set to 'false'
	I1201 21:07:14.739822  521964 command_runner.go:130] > # disable_hostport_mapping = false
	I1201 21:07:14.739830  521964 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1201 21:07:14.739839  521964 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1201 21:07:14.739846  521964 command_runner.go:130] > # timezone = ""
	I1201 21:07:14.739853  521964 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1201 21:07:14.739859  521964 command_runner.go:130] > #
	I1201 21:07:14.739866  521964 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1201 21:07:14.739884  521964 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1201 21:07:14.739892  521964 command_runner.go:130] > [crio.image]
	I1201 21:07:14.739898  521964 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1201 21:07:14.739903  521964 command_runner.go:130] > # default_transport = "docker://"
	I1201 21:07:14.739913  521964 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1201 21:07:14.739919  521964 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1201 21:07:14.739926  521964 command_runner.go:130] > # global_auth_file = ""
	I1201 21:07:14.739931  521964 command_runner.go:130] > # The image used to instantiate infra containers.
	I1201 21:07:14.739940  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.739952  521964 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1201 21:07:14.739964  521964 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1201 21:07:14.739973  521964 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1201 21:07:14.739979  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.739986  521964 command_runner.go:130] > # pause_image_auth_file = ""
	I1201 21:07:14.739993  521964 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1201 21:07:14.740002  521964 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1201 21:07:14.740009  521964 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1201 21:07:14.740029  521964 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1201 21:07:14.740037  521964 command_runner.go:130] > # pause_command = "/pause"
	I1201 21:07:14.740044  521964 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1201 21:07:14.740053  521964 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1201 21:07:14.740060  521964 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1201 21:07:14.740070  521964 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1201 21:07:14.740076  521964 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1201 21:07:14.740086  521964 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1201 21:07:14.740091  521964 command_runner.go:130] > # pinned_images = [
	I1201 21:07:14.740093  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740110  521964 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1201 21:07:14.740121  521964 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1201 21:07:14.740133  521964 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1201 21:07:14.740143  521964 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1201 21:07:14.740153  521964 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1201 21:07:14.740158  521964 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1201 21:07:14.740167  521964 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1201 21:07:14.740181  521964 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1201 21:07:14.740204  521964 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1201 21:07:14.740215  521964 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1201 21:07:14.740226  521964 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1201 21:07:14.740236  521964 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1201 21:07:14.740243  521964 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1201 21:07:14.740259  521964 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1201 21:07:14.740263  521964 command_runner.go:130] > # changing them here.
	I1201 21:07:14.740273  521964 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1201 21:07:14.740278  521964 command_runner.go:130] > # insecure_registries = [
	I1201 21:07:14.740285  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740293  521964 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1201 21:07:14.740302  521964 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1201 21:07:14.740306  521964 command_runner.go:130] > # image_volumes = "mkdir"
	I1201 21:07:14.740316  521964 command_runner.go:130] > # Temporary directory to use for storing big files
	I1201 21:07:14.740321  521964 command_runner.go:130] > # big_files_temporary_dir = ""
	I1201 21:07:14.740340  521964 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1201 21:07:14.740349  521964 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1201 21:07:14.740358  521964 command_runner.go:130] > # auto_reload_registries = false
	I1201 21:07:14.740364  521964 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1201 21:07:14.740376  521964 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1201 21:07:14.740387  521964 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1201 21:07:14.740391  521964 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1201 21:07:14.740399  521964 command_runner.go:130] > # The mode of short name resolution.
	I1201 21:07:14.740415  521964 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1201 21:07:14.740423  521964 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1201 21:07:14.740428  521964 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1201 21:07:14.740436  521964 command_runner.go:130] > # short_name_mode = "enforcing"
	I1201 21:07:14.740443  521964 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1201 21:07:14.740453  521964 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1201 21:07:14.740462  521964 command_runner.go:130] > # oci_artifact_mount_support = true
	I1201 21:07:14.740469  521964 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1201 21:07:14.740484  521964 command_runner.go:130] > # CNI plugins.
	I1201 21:07:14.740492  521964 command_runner.go:130] > [crio.network]
	I1201 21:07:14.740498  521964 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1201 21:07:14.740504  521964 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1201 21:07:14.740512  521964 command_runner.go:130] > # cni_default_network = ""
	I1201 21:07:14.740519  521964 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1201 21:07:14.740530  521964 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1201 21:07:14.740540  521964 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1201 21:07:14.740549  521964 command_runner.go:130] > # plugin_dirs = [
	I1201 21:07:14.740562  521964 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1201 21:07:14.740566  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740576  521964 command_runner.go:130] > # List of included pod metrics.
	I1201 21:07:14.740580  521964 command_runner.go:130] > # included_pod_metrics = [
	I1201 21:07:14.740583  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740588  521964 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1201 21:07:14.740596  521964 command_runner.go:130] > [crio.metrics]
	I1201 21:07:14.740602  521964 command_runner.go:130] > # Globally enable or disable metrics support.
	I1201 21:07:14.740614  521964 command_runner.go:130] > # enable_metrics = false
	I1201 21:07:14.740622  521964 command_runner.go:130] > # Specify enabled metrics collectors.
	I1201 21:07:14.740637  521964 command_runner.go:130] > # Per default all metrics are enabled.
	I1201 21:07:14.740644  521964 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1201 21:07:14.740655  521964 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1201 21:07:14.740662  521964 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1201 21:07:14.740666  521964 command_runner.go:130] > # metrics_collectors = [
	I1201 21:07:14.740674  521964 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1201 21:07:14.740680  521964 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1201 21:07:14.740688  521964 command_runner.go:130] > # 	"containers_oom_total",
	I1201 21:07:14.740692  521964 command_runner.go:130] > # 	"processes_defunct",
	I1201 21:07:14.740706  521964 command_runner.go:130] > # 	"operations_total",
	I1201 21:07:14.740714  521964 command_runner.go:130] > # 	"operations_latency_seconds",
	I1201 21:07:14.740719  521964 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1201 21:07:14.740727  521964 command_runner.go:130] > # 	"operations_errors_total",
	I1201 21:07:14.740731  521964 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1201 21:07:14.740736  521964 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1201 21:07:14.740740  521964 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1201 21:07:14.740748  521964 command_runner.go:130] > # 	"image_pulls_success_total",
	I1201 21:07:14.740753  521964 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1201 21:07:14.740761  521964 command_runner.go:130] > # 	"containers_oom_count_total",
	I1201 21:07:14.740766  521964 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1201 21:07:14.740780  521964 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1201 21:07:14.740789  521964 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1201 21:07:14.740792  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740803  521964 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1201 21:07:14.740807  521964 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1201 21:07:14.740812  521964 command_runner.go:130] > # The port on which the metrics server will listen.
	I1201 21:07:14.740816  521964 command_runner.go:130] > # metrics_port = 9090
	I1201 21:07:14.740825  521964 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1201 21:07:14.740829  521964 command_runner.go:130] > # metrics_socket = ""
	I1201 21:07:14.740839  521964 command_runner.go:130] > # The certificate for the secure metrics server.
	I1201 21:07:14.740846  521964 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1201 21:07:14.740867  521964 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1201 21:07:14.740879  521964 command_runner.go:130] > # certificate on any modification event.
	I1201 21:07:14.740883  521964 command_runner.go:130] > # metrics_cert = ""
	I1201 21:07:14.740888  521964 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1201 21:07:14.740897  521964 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1201 21:07:14.740901  521964 command_runner.go:130] > # metrics_key = ""
	I1201 21:07:14.740912  521964 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1201 21:07:14.740916  521964 command_runner.go:130] > [crio.tracing]
	I1201 21:07:14.740933  521964 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1201 21:07:14.740941  521964 command_runner.go:130] > # enable_tracing = false
	I1201 21:07:14.740946  521964 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1201 21:07:14.740959  521964 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1201 21:07:14.740966  521964 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1201 21:07:14.740970  521964 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1201 21:07:14.740975  521964 command_runner.go:130] > # CRI-O NRI configuration.
	I1201 21:07:14.740982  521964 command_runner.go:130] > [crio.nri]
	I1201 21:07:14.740987  521964 command_runner.go:130] > # Globally enable or disable NRI.
	I1201 21:07:14.740993  521964 command_runner.go:130] > # enable_nri = true
	I1201 21:07:14.741004  521964 command_runner.go:130] > # NRI socket to listen on.
	I1201 21:07:14.741013  521964 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1201 21:07:14.741018  521964 command_runner.go:130] > # NRI plugin directory to use.
	I1201 21:07:14.741026  521964 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1201 21:07:14.741031  521964 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1201 21:07:14.741039  521964 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1201 21:07:14.741046  521964 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1201 21:07:14.741111  521964 command_runner.go:130] > # nri_disable_connections = false
	I1201 21:07:14.741122  521964 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1201 21:07:14.741131  521964 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1201 21:07:14.741137  521964 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1201 21:07:14.741142  521964 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1201 21:07:14.741156  521964 command_runner.go:130] > # NRI default validator configuration.
	I1201 21:07:14.741167  521964 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1201 21:07:14.741178  521964 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1201 21:07:14.741190  521964 command_runner.go:130] > # can be restricted/rejected:
	I1201 21:07:14.741198  521964 command_runner.go:130] > # - OCI hook injection
	I1201 21:07:14.741206  521964 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1201 21:07:14.741214  521964 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1201 21:07:14.741218  521964 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1201 21:07:14.741229  521964 command_runner.go:130] > # - adjustment of linux namespaces
	I1201 21:07:14.741241  521964 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1201 21:07:14.741252  521964 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1201 21:07:14.741262  521964 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1201 21:07:14.741268  521964 command_runner.go:130] > #
	I1201 21:07:14.741276  521964 command_runner.go:130] > # [crio.nri.default_validator]
	I1201 21:07:14.741281  521964 command_runner.go:130] > # nri_enable_default_validator = false
	I1201 21:07:14.741290  521964 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1201 21:07:14.741295  521964 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1201 21:07:14.741308  521964 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1201 21:07:14.741318  521964 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1201 21:07:14.741323  521964 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1201 21:07:14.741331  521964 command_runner.go:130] > # nri_validator_required_plugins = [
	I1201 21:07:14.741334  521964 command_runner.go:130] > # ]
	I1201 21:07:14.741344  521964 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1201 21:07:14.741350  521964 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1201 21:07:14.741357  521964 command_runner.go:130] > [crio.stats]
	I1201 21:07:14.741364  521964 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1201 21:07:14.741379  521964 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1201 21:07:14.741384  521964 command_runner.go:130] > # stats_collection_period = 0
	I1201 21:07:14.741390  521964 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1201 21:07:14.741400  521964 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1201 21:07:14.741409  521964 command_runner.go:130] > # collection_period = 0
	I1201 21:07:14.743695  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.701489723Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1201 21:07:14.743741  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.701919228Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1201 21:07:14.743753  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.702192379Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1201 21:07:14.743761  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.70239116Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1201 21:07:14.743770  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.702743464Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.743783  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.703251326Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1201 21:07:14.743797  521964 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1201 21:07:14.743882  521964 cni.go:84] Creating CNI manager for ""
	I1201 21:07:14.743892  521964 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:07:14.743907  521964 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 21:07:14.743929  521964 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-198694 NodeName:functional-198694 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 21:07:14.744055  521964 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-198694"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
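	(Annotation: the config above is a single multi-document YAML stream holding four documents — InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration; the leading tabs are log indentation, and in the file itself each `---` separator sits at the start of its own line. A quick sanity check on the document boundaries before handing the file to kubeadm, sketched with awk — the path is the one the log scp's the config to:)

```shell
# Count the YAML documents in the generated kubeadm config by splitting
# on the standard "---" separator lines (expect 4 for this config).
awk 'BEGIN{n=1} /^---$/{n++} END{print n " documents"}' /var/tmp/minikube/kubeadm.yaml.new
```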
	
	I1201 21:07:14.744124  521964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 21:07:14.751405  521964 command_runner.go:130] > kubeadm
	I1201 21:07:14.751425  521964 command_runner.go:130] > kubectl
	I1201 21:07:14.751429  521964 command_runner.go:130] > kubelet
	I1201 21:07:14.752384  521964 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 21:07:14.752448  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 21:07:14.760026  521964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 21:07:14.773137  521964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 21:07:14.786891  521964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1201 21:07:14.799994  521964 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1201 21:07:14.803501  521964 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
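	(Annotation: the grep above confirms `control-plane.minikube.internal` is already pinned to 192.168.49.2 in /etc/hosts, so no write is needed. The usual idempotent add, sketched with the names from the log, is:)

```shell
# Append the control-plane host mapping only if it is not already present,
# so repeated runs never duplicate the entry.
HOSTS_ENTRY="$(printf '192.168.49.2\tcontrol-plane.minikube.internal')"
grep -q 'control-plane.minikube.internal$' /etc/hosts || \
  printf '%s\n' "$HOSTS_ENTRY" | sudo tee -a /etc/hosts >/dev/null
```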
	I1201 21:07:14.803615  521964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:07:14.920306  521964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 21:07:15.405274  521964 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694 for IP: 192.168.49.2
	I1201 21:07:15.405300  521964 certs.go:195] generating shared ca certs ...
	I1201 21:07:15.405343  521964 certs.go:227] acquiring lock for ca certs: {Name:mk0475ccdbd6f854bab22fd8dfb32cc1af021336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:15.405542  521964 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key
	I1201 21:07:15.405589  521964 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key
	I1201 21:07:15.405597  521964 certs.go:257] generating profile certs ...
	I1201 21:07:15.405726  521964 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key
	I1201 21:07:15.405806  521964 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key.ab5f5a28
	I1201 21:07:15.405849  521964 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key
	I1201 21:07:15.405858  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1201 21:07:15.405870  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1201 21:07:15.405880  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1201 21:07:15.405895  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1201 21:07:15.405908  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1201 21:07:15.405920  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1201 21:07:15.405931  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1201 21:07:15.405941  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1201 21:07:15.406006  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem (1338 bytes)
	W1201 21:07:15.406049  521964 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002_empty.pem, impossibly tiny 0 bytes
	I1201 21:07:15.406068  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 21:07:15.406113  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem (1082 bytes)
	I1201 21:07:15.406137  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem (1123 bytes)
	I1201 21:07:15.406172  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem (1675 bytes)
	I1201 21:07:15.406237  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:07:15.406287  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem -> /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.406308  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.406325  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.407085  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 21:07:15.435325  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 21:07:15.460453  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 21:07:15.484820  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1201 21:07:15.503541  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 21:07:15.522001  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 21:07:15.540074  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 21:07:15.557935  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 21:07:15.576709  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem --> /usr/share/ca-certificates/486002.pem (1338 bytes)
	I1201 21:07:15.595484  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /usr/share/ca-certificates/4860022.pem (1708 bytes)
	I1201 21:07:15.614431  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 21:07:15.632609  521964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 21:07:15.645463  521964 ssh_runner.go:195] Run: openssl version
	I1201 21:07:15.651732  521964 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1201 21:07:15.652120  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/486002.pem && ln -fs /usr/share/ca-certificates/486002.pem /etc/ssl/certs/486002.pem"
	I1201 21:07:15.660522  521964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.664099  521964 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.664137  521964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.664196  521964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.704899  521964 command_runner.go:130] > 51391683
	I1201 21:07:15.705348  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/486002.pem /etc/ssl/certs/51391683.0"
	I1201 21:07:15.713374  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4860022.pem && ln -fs /usr/share/ca-certificates/4860022.pem /etc/ssl/certs/4860022.pem"
	I1201 21:07:15.721756  521964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.725563  521964 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.725613  521964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.725662  521964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.766341  521964 command_runner.go:130] > 3ec20f2e
	I1201 21:07:15.766756  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4860022.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 21:07:15.774531  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 21:07:15.784868  521964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.788871  521964 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.788929  521964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.788991  521964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.829962  521964 command_runner.go:130] > b5213941
	I1201 21:07:15.830101  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
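	(Annotation: the three hash-and-link steps above follow OpenSSL's hashed-directory lookup convention: each CA certificate in /etc/ssl/certs is reachable via a symlink named after the certificate's subject-name hash, with a `.0` suffix to disambiguate hash collisions — e.g. `b5213941.0` for minikubeCA.pem. One such step, sketched with paths taken from the log:)

```shell
# OpenSSL locates a trusted CA in /etc/ssl/certs by a "<subject-hash>.0"
# symlink; compute the hash and create the link if it is missing.
CERT=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")
sudo /bin/bash -c "test -L /etc/ssl/certs/${HASH}.0 || ln -fs ${CERT} /etc/ssl/certs/${HASH}.0"
```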
	I1201 21:07:15.838399  521964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 21:07:15.842255  521964 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 21:07:15.842282  521964 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1201 21:07:15.842289  521964 command_runner.go:130] > Device: 259,1	Inode: 2345358     Links: 1
	I1201 21:07:15.842296  521964 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1201 21:07:15.842308  521964 command_runner.go:130] > Access: 2025-12-01 21:03:07.261790641 +0000
	I1201 21:07:15.842313  521964 command_runner.go:130] > Modify: 2025-12-01 20:59:03.599058650 +0000
	I1201 21:07:15.842318  521964 command_runner.go:130] > Change: 2025-12-01 20:59:03.599058650 +0000
	I1201 21:07:15.842324  521964 command_runner.go:130] >  Birth: 2025-12-01 20:59:03.599058650 +0000
	I1201 21:07:15.842405  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 21:07:15.883885  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:15.884377  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 21:07:15.925029  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:15.925488  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 21:07:15.967363  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:15.967505  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 21:07:16.008933  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:16.009470  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 21:07:16.052395  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:16.052881  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 21:07:16.094441  521964 command_runner.go:130] > Certificate will not expire
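	(Annotation: each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 86400 seconds — 24 hours — from now; exit status 0 prints "Certificate will not expire", non-zero means it expires within the window. The same sweep over a few of the control-plane client certs, sketched with cert names from the log:)

```shell
# -checkend N exits 0 iff the certificate is still valid N seconds from now.
for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
  openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
    || echo "${crt}.crt expires within 24h"
done
```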
	I1201 21:07:16.094868  521964 kubeadm.go:401] StartCluster: {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:07:16.094970  521964 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 21:07:16.095033  521964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 21:07:16.122671  521964 cri.go:89] found id: ""
	I1201 21:07:16.122745  521964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 21:07:16.129629  521964 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1201 21:07:16.129704  521964 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1201 21:07:16.129749  521964 command_runner.go:130] > /var/lib/minikube/etcd:
	I1201 21:07:16.130618  521964 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 21:07:16.130634  521964 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 21:07:16.130700  521964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 21:07:16.138263  521964 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:07:16.138690  521964 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-198694" does not appear in /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.138796  521964 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-482752/kubeconfig needs updating (will repair): [kubeconfig missing "functional-198694" cluster setting kubeconfig missing "functional-198694" context setting]
	I1201 21:07:16.139097  521964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/kubeconfig: {Name:mk92cfd0553ba70a7f11610c1bc1b8b04b905ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:16.139560  521964 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.139697  521964 kapi.go:59] client config for functional-198694: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key", CAFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 21:07:16.140229  521964 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1201 21:07:16.140256  521964 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1201 21:07:16.140265  521964 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1201 21:07:16.140270  521964 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1201 21:07:16.140285  521964 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1201 21:07:16.140581  521964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 21:07:16.140673  521964 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1201 21:07:16.148484  521964 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1201 21:07:16.148518  521964 kubeadm.go:602] duration metric: took 17.877938ms to restartPrimaryControlPlane
	I1201 21:07:16.148528  521964 kubeadm.go:403] duration metric: took 53.667619ms to StartCluster
	I1201 21:07:16.148545  521964 settings.go:142] acquiring lock: {Name:mk783c1fd28fb527bb837882511f132133dc86fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:16.148604  521964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.149244  521964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/kubeconfig: {Name:mk92cfd0553ba70a7f11610c1bc1b8b04b905ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:16.149450  521964 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 21:07:16.149837  521964 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:07:16.149887  521964 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 21:07:16.149959  521964 addons.go:70] Setting storage-provisioner=true in profile "functional-198694"
	I1201 21:07:16.149971  521964 addons.go:239] Setting addon storage-provisioner=true in "functional-198694"
	I1201 21:07:16.149997  521964 host.go:66] Checking if "functional-198694" exists ...
	I1201 21:07:16.150469  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:16.150813  521964 addons.go:70] Setting default-storageclass=true in profile "functional-198694"
	I1201 21:07:16.150847  521964 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-198694"
	I1201 21:07:16.151095  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:16.157800  521964 out.go:179] * Verifying Kubernetes components...
	I1201 21:07:16.160495  521964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:07:16.191854  521964 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 21:07:16.194709  521964 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:16.194728  521964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 21:07:16.194804  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:16.200857  521964 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.201020  521964 kapi.go:59] client config for functional-198694: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key", CAFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 21:07:16.201620  521964 addons.go:239] Setting addon default-storageclass=true in "functional-198694"
	I1201 21:07:16.201664  521964 host.go:66] Checking if "functional-198694" exists ...
	I1201 21:07:16.202447  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:16.245603  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:16.261120  521964 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:16.261144  521964 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 21:07:16.261216  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:16.294119  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:16.373164  521964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 21:07:16.408855  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:16.445769  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:17.156317  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.156488  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156559  521964 retry.go:31] will retry after 323.483538ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156628  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.156659  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156673  521964 retry.go:31] will retry after 132.387182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156540  521964 node_ready.go:35] waiting up to 6m0s for node "functional-198694" to be "Ready" ...
	I1201 21:07:17.156859  521964 type.go:168] "Request Body" body=""
	I1201 21:07:17.156951  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:17.157318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:17.289607  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:17.345927  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.349389  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.349423  521964 retry.go:31] will retry after 369.598465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.480797  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:17.537300  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.541071  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.541105  521964 retry.go:31] will retry after 250.665906ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.657414  521964 type.go:168] "Request Body" body=""
	I1201 21:07:17.657490  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:17.657803  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:17.720223  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:17.783305  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.783341  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.783362  521964 retry.go:31] will retry after 375.003536ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.792548  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:17.854946  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.854989  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.855009  521964 retry.go:31] will retry after 643.882626ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.157596  521964 type.go:168] "Request Body" body=""
	I1201 21:07:18.157670  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:18.158003  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:18.159267  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:18.225579  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:18.225683  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.225726  521964 retry.go:31] will retry after 1.172405999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.500161  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:18.566908  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:18.566958  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.566979  521964 retry.go:31] will retry after 1.221518169s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.657190  521964 type.go:168] "Request Body" body=""
	I1201 21:07:18.657271  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:18.657601  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:19.157332  521964 type.go:168] "Request Body" body=""
	I1201 21:07:19.157408  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:19.157736  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:19.157807  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:19.398291  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:19.478299  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:19.478401  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:19.478424  521964 retry.go:31] will retry after 725.636222ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:19.657679  521964 type.go:168] "Request Body" body=""
	I1201 21:07:19.657755  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:19.658075  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:19.789414  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:19.847191  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:19.847229  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:19.847250  521964 retry.go:31] will retry after 688.680113ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.157514  521964 type.go:168] "Request Body" body=""
	I1201 21:07:20.157586  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:20.157835  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:20.205210  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:20.265409  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:20.265448  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.265467  521964 retry.go:31] will retry after 1.46538703s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.536913  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:20.597058  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:20.597109  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.597130  521964 retry.go:31] will retry after 1.65793185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.657434  521964 type.go:168] "Request Body" body=""
	I1201 21:07:20.657509  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:20.657856  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:21.157726  521964 type.go:168] "Request Body" body=""
	I1201 21:07:21.157805  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:21.158133  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:21.158204  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:21.656922  521964 type.go:168] "Request Body" body=""
	I1201 21:07:21.657048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:21.657367  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:21.731621  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:21.794486  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:21.794526  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:21.794546  521964 retry.go:31] will retry after 2.907930062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:22.156980  521964 type.go:168] "Request Body" body=""
	I1201 21:07:22.157055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:22.157385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:22.255851  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:22.319449  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:22.319491  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:22.319511  521964 retry.go:31] will retry after 2.874628227s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:22.656962  521964 type.go:168] "Request Body" body=""
	I1201 21:07:22.657055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:22.657381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:23.156910  521964 type.go:168] "Request Body" body=""
	I1201 21:07:23.156984  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:23.157294  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:23.656973  521964 type.go:168] "Request Body" body=""
	I1201 21:07:23.657065  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:23.657420  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:23.657472  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:24.157139  521964 type.go:168] "Request Body" body=""
	I1201 21:07:24.157221  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:24.157543  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:24.657245  521964 type.go:168] "Request Body" body=""
	I1201 21:07:24.657316  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:24.657622  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:24.702795  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:24.765996  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:24.766044  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:24.766064  521964 retry.go:31] will retry after 4.286350529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:25.157658  521964 type.go:168] "Request Body" body=""
	I1201 21:07:25.157735  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:25.158024  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:25.194368  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:25.250297  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:25.253946  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:25.253992  521964 retry.go:31] will retry after 4.844090269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:25.657562  521964 type.go:168] "Request Body" body=""
	I1201 21:07:25.657643  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:25.657986  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:25.658042  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:26.157893  521964 type.go:168] "Request Body" body=""
	I1201 21:07:26.157964  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:26.158227  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:26.657145  521964 type.go:168] "Request Body" body=""
	I1201 21:07:26.657225  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:26.657521  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:27.156930  521964 type.go:168] "Request Body" body=""
	I1201 21:07:27.157016  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:27.157343  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:27.656909  521964 type.go:168] "Request Body" body=""
	I1201 21:07:27.656990  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:27.657272  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:28.156970  521964 type.go:168] "Request Body" body=""
	I1201 21:07:28.157048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:28.157420  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:28.157476  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:28.657148  521964 type.go:168] "Request Body" body=""
	I1201 21:07:28.657244  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:28.657592  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:29.053156  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:29.109834  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:29.112973  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:29.113004  521964 retry.go:31] will retry after 7.544668628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:29.157167  521964 type.go:168] "Request Body" body=""
	I1201 21:07:29.157241  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:29.157507  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:29.656951  521964 type.go:168] "Request Body" body=""
	I1201 21:07:29.657043  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:29.657380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:30.099244  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:30.157873  521964 type.go:168] "Request Body" body=""
	I1201 21:07:30.157941  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:30.158210  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:30.158254  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:30.164980  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:30.165032  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:30.165052  521964 retry.go:31] will retry after 3.932491359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:30.657621  521964 type.go:168] "Request Body" body=""
	I1201 21:07:30.657701  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:30.657964  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:31.157730  521964 type.go:168] "Request Body" body=""
	I1201 21:07:31.157809  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:31.158140  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:31.656971  521964 type.go:168] "Request Body" body=""
	I1201 21:07:31.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:31.657377  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:32.156945  521964 type.go:168] "Request Body" body=""
	I1201 21:07:32.157020  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:32.157283  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:32.656981  521964 type.go:168] "Request Body" body=""
	I1201 21:07:32.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:32.657395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:32.657449  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:33.156987  521964 type.go:168] "Request Body" body=""
	I1201 21:07:33.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:33.157406  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:33.657102  521964 type.go:168] "Request Body" body=""
	I1201 21:07:33.657175  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:33.657460  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:34.097811  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:34.156372  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:34.156417  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:34.156437  521964 retry.go:31] will retry after 10.974576666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:34.157589  521964 type.go:168] "Request Body" body=""
	I1201 21:07:34.157652  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:34.157912  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:34.657701  521964 type.go:168] "Request Body" body=""
	I1201 21:07:34.657780  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:34.658097  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:34.658164  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:35.157826  521964 type.go:168] "Request Body" body=""
	I1201 21:07:35.157905  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:35.158165  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:35.656910  521964 type.go:168] "Request Body" body=""
	I1201 21:07:35.656988  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:35.657287  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:36.157319  521964 type.go:168] "Request Body" body=""
	I1201 21:07:36.157409  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:36.157794  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:36.657573  521964 type.go:168] "Request Body" body=""
	I1201 21:07:36.657644  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:36.657912  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:36.658034  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:36.730483  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:36.730533  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:36.730554  521964 retry.go:31] will retry after 6.063500375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:37.157009  521964 type.go:168] "Request Body" body=""
	I1201 21:07:37.157097  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:37.157448  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:37.157505  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:37.657206  521964 type.go:168] "Request Body" body=""
	I1201 21:07:37.657296  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:37.657631  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:38.157704  521964 type.go:168] "Request Body" body=""
	I1201 21:07:38.157772  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:38.158095  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:38.657893  521964 type.go:168] "Request Body" body=""
	I1201 21:07:38.657966  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:38.658289  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:39.156875  521964 type.go:168] "Request Body" body=""
	I1201 21:07:39.156971  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:39.157322  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:39.656915  521964 type.go:168] "Request Body" body=""
	I1201 21:07:39.657022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:39.657329  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:39.657378  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:40.156953  521964 type.go:168] "Request Body" body=""
	I1201 21:07:40.157033  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:40.157378  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:40.657085  521964 type.go:168] "Request Body" body=""
	I1201 21:07:40.657161  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:40.657485  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:41.157198  521964 type.go:168] "Request Body" body=""
	I1201 21:07:41.157267  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:41.157524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:41.657708  521964 type.go:168] "Request Body" body=""
	I1201 21:07:41.657786  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:41.658115  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:41.658168  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:42.157124  521964 type.go:168] "Request Body" body=""
	I1201 21:07:42.157211  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:42.157646  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:42.657031  521964 type.go:168] "Request Body" body=""
	I1201 21:07:42.657110  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:42.657398  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:42.794843  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:42.853617  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:42.853659  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:42.853680  521964 retry.go:31] will retry after 14.65335173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:43.156952  521964 type.go:168] "Request Body" body=""
	I1201 21:07:43.157032  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:43.157349  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:43.656979  521964 type.go:168] "Request Body" body=""
	I1201 21:07:43.657076  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:43.657394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:44.156994  521964 type.go:168] "Request Body" body=""
	I1201 21:07:44.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:44.157343  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:44.157384  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:44.656997  521964 type.go:168] "Request Body" body=""
	I1201 21:07:44.657087  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:44.657396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:45.131211  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:45.157806  521964 type.go:168] "Request Body" body=""
	I1201 21:07:45.157891  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:45.158212  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:45.221334  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:45.221384  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:45.221409  521964 retry.go:31] will retry after 11.551495399s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:45.657812  521964 type.go:168] "Request Body" body=""
	I1201 21:07:45.657890  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:45.658158  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:46.157214  521964 type.go:168] "Request Body" body=""
	I1201 21:07:46.157292  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:46.157581  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:46.157642  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:46.657575  521964 type.go:168] "Request Body" body=""
	I1201 21:07:46.657647  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:46.657977  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:47.157285  521964 type.go:168] "Request Body" body=""
	I1201 21:07:47.157350  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:47.157647  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:47.656995  521964 type.go:168] "Request Body" body=""
	I1201 21:07:47.657075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:47.657403  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:48.156986  521964 type.go:168] "Request Body" body=""
	I1201 21:07:48.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:48.157380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:48.657048  521964 type.go:168] "Request Body" body=""
	I1201 21:07:48.657118  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:48.657442  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:48.657502  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:49.157019  521964 type.go:168] "Request Body" body=""
	I1201 21:07:49.157102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:49.157404  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:49.657133  521964 type.go:168] "Request Body" body=""
	I1201 21:07:49.657208  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:49.657513  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:50.156941  521964 type.go:168] "Request Body" body=""
	I1201 21:07:50.157013  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:50.157268  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:50.656997  521964 type.go:168] "Request Body" body=""
	I1201 21:07:50.657077  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:50.657401  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:51.157109  521964 type.go:168] "Request Body" body=""
	I1201 21:07:51.157181  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:51.157563  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:51.157620  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:51.657409  521964 type.go:168] "Request Body" body=""
	I1201 21:07:51.657480  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:51.657812  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:52.157619  521964 type.go:168] "Request Body" body=""
	I1201 21:07:52.157701  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:52.158034  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:52.657819  521964 type.go:168] "Request Body" body=""
	I1201 21:07:52.657897  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:52.658222  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:53.157452  521964 type.go:168] "Request Body" body=""
	I1201 21:07:53.157532  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:53.157789  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:53.157829  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:53.657659  521964 type.go:168] "Request Body" body=""
	I1201 21:07:53.657737  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:53.658067  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:54.157887  521964 type.go:168] "Request Body" body=""
	I1201 21:07:54.157963  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:54.158311  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:54.656870  521964 type.go:168] "Request Body" body=""
	I1201 21:07:54.656941  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:54.657207  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:55.156922  521964 type.go:168] "Request Body" body=""
	I1201 21:07:55.156998  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:55.157347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:55.656937  521964 type.go:168] "Request Body" body=""
	I1201 21:07:55.657065  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:55.657390  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:55.657445  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:56.157203  521964 type.go:168] "Request Body" body=""
	I1201 21:07:56.157283  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:56.157556  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:56.657510  521964 type.go:168] "Request Body" body=""
	I1201 21:07:56.657589  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:56.657925  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:56.773160  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:56.828599  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:56.831983  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:56.832017  521964 retry.go:31] will retry after 19.593958555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:57.157556  521964 type.go:168] "Request Body" body=""
	I1201 21:07:57.157632  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:57.157962  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:57.507290  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:57.561691  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:57.565020  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:57.565054  521964 retry.go:31] will retry after 13.393925675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:57.657318  521964 type.go:168] "Request Body" body=""
	I1201 21:07:57.657392  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:57.657711  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:57.657760  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:58.157573  521964 type.go:168] "Request Body" body=""
	I1201 21:07:58.157646  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:58.157951  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:58.657731  521964 type.go:168] "Request Body" body=""
	I1201 21:07:58.657806  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:58.658143  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:59.157766  521964 type.go:168] "Request Body" body=""
	I1201 21:07:59.157844  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:59.158113  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:59.657909  521964 type.go:168] "Request Body" body=""
	I1201 21:07:59.657992  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:59.658327  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:59.658388  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:00.157067  521964 type.go:168] "Request Body" body=""
	I1201 21:08:00.157155  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:00.157502  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:00.656909  521964 type.go:168] "Request Body" body=""
	I1201 21:08:00.656981  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:00.657260  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:01.156980  521964 type.go:168] "Request Body" body=""
	I1201 21:08:01.157061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:01.157434  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:01.656989  521964 type.go:168] "Request Body" body=""
	I1201 21:08:01.657067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:01.657427  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:02.157121  521964 type.go:168] "Request Body" body=""
	I1201 21:08:02.157192  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:02.157450  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:02.157491  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:02.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:08:02.657058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:02.657389  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:03.156950  521964 type.go:168] "Request Body" body=""
	I1201 21:08:03.157027  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:03.157381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:03.657653  521964 type.go:168] "Request Body" body=""
	I1201 21:08:03.657724  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:03.658043  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:04.157851  521964 type.go:168] "Request Body" body=""
	I1201 21:08:04.157926  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:04.158295  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:04.158353  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:04.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:08:04.656984  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:04.657308  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:05.159258  521964 type.go:168] "Request Body" body=""
	I1201 21:08:05.159335  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:05.159644  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:05.656978  521964 type.go:168] "Request Body" body=""
	I1201 21:08:05.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:05.657400  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:06.157419  521964 type.go:168] "Request Body" body=""
	I1201 21:08:06.157493  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:06.157828  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:06.657668  521964 type.go:168] "Request Body" body=""
	I1201 21:08:06.657743  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:06.658026  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:06.658074  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:07.157783  521964 type.go:168] "Request Body" body=""
	I1201 21:08:07.157860  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:07.158171  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:07.656931  521964 type.go:168] "Request Body" body=""
	I1201 21:08:07.657012  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:07.657345  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:08.157032  521964 type.go:168] "Request Body" body=""
	I1201 21:08:08.157106  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:08.157464  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:08.657162  521964 type.go:168] "Request Body" body=""
	I1201 21:08:08.657254  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:08.657573  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:09.157296  521964 type.go:168] "Request Body" body=""
	I1201 21:08:09.157373  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:09.157697  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:09.157750  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:09.657059  521964 type.go:168] "Request Body" body=""
	I1201 21:08:09.657141  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:09.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:10.156962  521964 type.go:168] "Request Body" body=""
	I1201 21:08:10.157037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:10.157365  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:10.656967  521964 type.go:168] "Request Body" body=""
	I1201 21:08:10.657051  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:10.657364  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:10.960044  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:08:11.016321  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:11.019785  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:11.019824  521964 retry.go:31] will retry after 44.695855679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:11.156928  521964 type.go:168] "Request Body" body=""
	I1201 21:08:11.157027  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:11.157315  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:11.657003  521964 type.go:168] "Request Body" body=""
	I1201 21:08:11.657075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:11.657408  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:11.657463  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:12.157121  521964 type.go:168] "Request Body" body=""
	I1201 21:08:12.157198  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:12.157770  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:12.657058  521964 type.go:168] "Request Body" body=""
	I1201 21:08:12.657134  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:12.657388  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:13.156995  521964 type.go:168] "Request Body" body=""
	I1201 21:08:13.157075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:13.157385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:13.657097  521964 type.go:168] "Request Body" body=""
	I1201 21:08:13.657169  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:13.657467  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:13.657512  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:14.156922  521964 type.go:168] "Request Body" body=""
	I1201 21:08:14.157012  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:14.157318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:14.657025  521964 type.go:168] "Request Body" body=""
	I1201 21:08:14.657098  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:14.657429  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:15.157163  521964 type.go:168] "Request Body" body=""
	I1201 21:08:15.157273  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:15.157607  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:15.657300  521964 type.go:168] "Request Body" body=""
	I1201 21:08:15.657393  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:15.657718  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:15.657762  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:16.157618  521964 type.go:168] "Request Body" body=""
	I1201 21:08:16.157704  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:16.158073  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:16.426568  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:08:16.504541  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:16.504580  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:16.504599  521964 retry.go:31] will retry after 41.569353087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:16.657931  521964 type.go:168] "Request Body" body=""
	I1201 21:08:16.658002  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:16.658310  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:17.156879  521964 type.go:168] "Request Body" body=""
	I1201 21:08:17.156968  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:17.157222  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:17.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:08:17.657052  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:17.657405  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:18.157142  521964 type.go:168] "Request Body" body=""
	I1201 21:08:18.157229  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:18.157610  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:18.157665  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:18.657777  521964 type.go:168] "Request Body" body=""
	I1201 21:08:18.657865  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:18.658174  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:19.156889  521964 type.go:168] "Request Body" body=""
	I1201 21:08:19.156967  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:19.157284  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:19.657012  521964 type.go:168] "Request Body" body=""
	I1201 21:08:19.657096  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:19.657452  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:20.156910  521964 type.go:168] "Request Body" body=""
	I1201 21:08:20.156981  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:20.157264  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:20.656990  521964 type.go:168] "Request Body" body=""
	I1201 21:08:20.657076  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:20.657458  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:20.657526  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:21.157000  521964 type.go:168] "Request Body" body=""
	I1201 21:08:21.157092  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:21.157485  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:21.656883  521964 type.go:168] "Request Body" body=""
	I1201 21:08:21.656968  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:21.657320  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:22.157049  521964 type.go:168] "Request Body" body=""
	I1201 21:08:22.157135  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:22.157505  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:22.657283  521964 type.go:168] "Request Body" body=""
	I1201 21:08:22.657387  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:22.657820  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:22.657893  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:23.157642  521964 type.go:168] "Request Body" body=""
	I1201 21:08:23.157715  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:23.157983  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:23.657627  521964 type.go:168] "Request Body" body=""
	I1201 21:08:23.657716  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:23.658152  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:24.156945  521964 type.go:168] "Request Body" body=""
	I1201 21:08:24.157033  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:24.157478  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:24.657185  521964 type.go:168] "Request Body" body=""
	I1201 21:08:24.657275  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:24.657653  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:25.157527  521964 type.go:168] "Request Body" body=""
	I1201 21:08:25.157631  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:25.158006  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:25.158072  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:25.657861  521964 type.go:168] "Request Body" body=""
	I1201 21:08:25.657947  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:25.658305  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:26.157315  521964 type.go:168] "Request Body" body=""
	I1201 21:08:26.157387  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:26.157664  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:26.657761  521964 type.go:168] "Request Body" body=""
	I1201 21:08:26.657845  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:26.658250  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:27.157001  521964 type.go:168] "Request Body" body=""
	I1201 21:08:27.157089  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:27.157490  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:27.657204  521964 type.go:168] "Request Body" body=""
	I1201 21:08:27.657277  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:27.657573  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:27.657627  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:28.157001  521964 type.go:168] "Request Body" body=""
	I1201 21:08:28.157095  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:28.157476  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:28.657072  521964 type.go:168] "Request Body" body=""
	I1201 21:08:28.657162  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:28.657537  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:29.157417  521964 type.go:168] "Request Body" body=""
	I1201 21:08:29.157501  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:29.157799  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:29.657718  521964 type.go:168] "Request Body" body=""
	I1201 21:08:29.657811  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:29.658220  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:29.658280  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:30.156978  521964 type.go:168] "Request Body" body=""
	I1201 21:08:30.157057  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:30.157401  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:30.656889  521964 type.go:168] "Request Body" body=""
	I1201 21:08:30.656971  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:30.657275  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:31.157026  521964 type.go:168] "Request Body" body=""
	I1201 21:08:31.157118  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:31.157518  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:31.656986  521964 type.go:168] "Request Body" body=""
	I1201 21:08:31.657072  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:31.657407  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:32.157753  521964 type.go:168] "Request Body" body=""
	I1201 21:08:32.157835  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:32.158232  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:32.158291  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:32.657000  521964 type.go:168] "Request Body" body=""
	I1201 21:08:32.657087  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:32.657475  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:33.157220  521964 type.go:168] "Request Body" body=""
	I1201 21:08:33.157305  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:33.157692  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:33.657487  521964 type.go:168] "Request Body" body=""
	I1201 21:08:33.657629  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:33.657931  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:34.157729  521964 type.go:168] "Request Body" body=""
	I1201 21:08:34.157800  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:34.158115  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:34.656912  521964 type.go:168] "Request Body" body=""
	I1201 21:08:34.657003  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:34.657412  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:34.657482  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:35.157152  521964 type.go:168] "Request Body" body=""
	I1201 21:08:35.157241  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:35.157546  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:35.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:08:35.657062  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:35.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:36.157282  521964 type.go:168] "Request Body" body=""
	I1201 21:08:36.157367  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:36.157727  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:36.657599  521964 type.go:168] "Request Body" body=""
	I1201 21:08:36.657686  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:36.657988  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:36.658045  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:37.157802  521964 type.go:168] "Request Body" body=""
	I1201 21:08:37.157896  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:37.158276  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:37.657031  521964 type.go:168] "Request Body" body=""
	I1201 21:08:37.657119  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:37.657486  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:38.157766  521964 type.go:168] "Request Body" body=""
	I1201 21:08:38.157842  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:38.158130  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:38.657916  521964 type.go:168] "Request Body" body=""
	I1201 21:08:38.657997  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:38.658359  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:38.658421  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:39.156995  521964 type.go:168] "Request Body" body=""
	I1201 21:08:39.157093  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:39.157508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:39.657230  521964 type.go:168] "Request Body" body=""
	I1201 21:08:39.657317  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:39.657685  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:40.157525  521964 type.go:168] "Request Body" body=""
	I1201 21:08:40.157614  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:40.157997  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:40.657880  521964 type.go:168] "Request Body" body=""
	I1201 21:08:40.657968  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:40.658348  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:41.156945  521964 type.go:168] "Request Body" body=""
	I1201 21:08:41.157030  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:41.157382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:41.157447  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:41.657680  521964 type.go:168] "Request Body" body=""
	I1201 21:08:41.657767  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:41.658134  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:42.156920  521964 type.go:168] "Request Body" body=""
	I1201 21:08:42.157036  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:42.157525  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:42.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:08:42.657005  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:42.657312  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:43.156990  521964 type.go:168] "Request Body" body=""
	I1201 21:08:43.157079  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:43.157479  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:43.157548  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:43.657235  521964 type.go:168] "Request Body" body=""
	I1201 21:08:43.657325  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:43.657683  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:44.157498  521964 type.go:168] "Request Body" body=""
	I1201 21:08:44.157581  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:44.158002  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:44.657818  521964 type.go:168] "Request Body" body=""
	I1201 21:08:44.657915  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:44.658331  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:45.157080  521964 type.go:168] "Request Body" body=""
	I1201 21:08:45.157172  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:45.157650  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:45.157719  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:45.656935  521964 type.go:168] "Request Body" body=""
	I1201 21:08:45.657016  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:45.657311  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:46.157385  521964 type.go:168] "Request Body" body=""
	I1201 21:08:46.157475  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:46.157855  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:46.657753  521964 type.go:168] "Request Body" body=""
	I1201 21:08:46.657842  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:46.658189  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:47.157536  521964 type.go:168] "Request Body" body=""
	I1201 21:08:47.157614  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:47.157944  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:47.157998  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:47.657747  521964 type.go:168] "Request Body" body=""
	I1201 21:08:47.657826  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:47.658196  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:48.157876  521964 type.go:168] "Request Body" body=""
	I1201 21:08:48.157958  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:48.158348  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:48.656928  521964 type.go:168] "Request Body" body=""
	I1201 21:08:48.657026  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:48.657375  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:49.156985  521964 type.go:168] "Request Body" body=""
	I1201 21:08:49.157067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:49.157458  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:49.657192  521964 type.go:168] "Request Body" body=""
	I1201 21:08:49.657287  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:49.657715  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:49.657793  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:50.157561  521964 type.go:168] "Request Body" body=""
	I1201 21:08:50.157644  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:50.157981  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:50.657775  521964 type.go:168] "Request Body" body=""
	I1201 21:08:50.657859  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:50.658229  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:51.156948  521964 type.go:168] "Request Body" body=""
	I1201 21:08:51.157046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:51.157416  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:51.656916  521964 type.go:168] "Request Body" body=""
	I1201 21:08:51.656999  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:51.657330  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:52.157006  521964 type.go:168] "Request Body" body=""
	I1201 21:08:52.157094  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:52.157485  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:52.157551  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:52.657260  521964 type.go:168] "Request Body" body=""
	I1201 21:08:52.657345  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:52.657796  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:53.157505  521964 type.go:168] "Request Body" body=""
	I1201 21:08:53.157589  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:53.157948  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:53.657814  521964 type.go:168] "Request Body" body=""
	I1201 21:08:53.657901  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:53.658274  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:54.157033  521964 type.go:168] "Request Body" body=""
	I1201 21:08:54.157120  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:54.157494  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:54.657829  521964 type.go:168] "Request Body" body=""
	I1201 21:08:54.657912  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:54.658226  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:54.658280  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:55.156969  521964 type.go:168] "Request Body" body=""
	I1201 21:08:55.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:55.157449  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:55.657040  521964 type.go:168] "Request Body" body=""
	I1201 21:08:55.657127  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:55.657483  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:55.716783  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:08:55.791498  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:55.795332  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:55.795559  521964 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1201 21:08:56.157158  521964 type.go:168] "Request Body" body=""
	I1201 21:08:56.157251  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:56.157619  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:56.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:08:56.657679  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:56.658038  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:57.157909  521964 type.go:168] "Request Body" body=""
	I1201 21:08:57.157989  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:57.158351  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:57.158413  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:57.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:08:57.656992  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:57.657380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:58.074174  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:08:58.149106  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:58.149168  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:58.149265  521964 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1201 21:08:58.152649  521964 out.go:179] * Enabled addons: 
	I1201 21:08:58.156383  521964 addons.go:530] duration metric: took 1m42.00648536s for enable addons: enabled=[]
	I1201 21:08:58.157274  521964 type.go:168] "Request Body" body=""
	I1201 21:08:58.157352  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:58.157737  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:58.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:08:58.657670  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:58.658025  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:59.157338  521964 type.go:168] "Request Body" body=""
	I1201 21:08:59.157435  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:59.157723  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:59.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:08:59.657679  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:59.658051  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:59.658126  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:00.157924  521964 type.go:168] "Request Body" body=""
	I1201 21:09:00.158055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:00.158429  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:00.656938  521964 type.go:168] "Request Body" body=""
	I1201 21:09:00.657023  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:00.657396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:01.157023  521964 type.go:168] "Request Body" body=""
	I1201 21:09:01.157113  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:01.157519  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:01.657045  521964 type.go:168] "Request Body" body=""
	I1201 21:09:01.657134  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:01.657523  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:02.157228  521964 type.go:168] "Request Body" body=""
	I1201 21:09:02.157309  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:02.157730  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:02.157812  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:02.657697  521964 type.go:168] "Request Body" body=""
	I1201 21:09:02.657797  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:02.658264  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:03.157016  521964 type.go:168] "Request Body" body=""
	I1201 21:09:03.157105  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:03.157506  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:03.656940  521964 type.go:168] "Request Body" body=""
	I1201 21:09:03.657023  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:03.657317  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:04.157109  521964 type.go:168] "Request Body" body=""
	I1201 21:09:04.157198  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:04.157621  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:04.657376  521964 type.go:168] "Request Body" body=""
	I1201 21:09:04.657464  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:04.657841  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:04.657911  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:05.157626  521964 type.go:168] "Request Body" body=""
	I1201 21:09:05.157704  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:05.158028  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:05.657928  521964 type.go:168] "Request Body" body=""
	I1201 21:09:05.658022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:05.658411  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:06.157283  521964 type.go:168] "Request Body" body=""
	I1201 21:09:06.157384  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:06.157756  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:06.657421  521964 type.go:168] "Request Body" body=""
	I1201 21:09:06.657507  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:06.657800  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:07.157695  521964 type.go:168] "Request Body" body=""
	I1201 21:09:07.157786  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:07.158194  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:07.158265  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:07.656962  521964 type.go:168] "Request Body" body=""
	I1201 21:09:07.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:07.657425  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-198694 polls repeated every ~500ms from 21:09:08 to 21:09:41, each failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 retry warnings logged every ~2.5s ...]
	I1201 21:09:42.157048  521964 type.go:168] "Request Body" body=""
	I1201 21:09:42.157140  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:42.157537  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:42.657274  521964 type.go:168] "Request Body" body=""
	I1201 21:09:42.657353  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:42.657634  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:43.156967  521964 type.go:168] "Request Body" body=""
	I1201 21:09:43.157039  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:43.157360  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:43.656976  521964 type.go:168] "Request Body" body=""
	I1201 21:09:43.657053  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:43.657381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:43.657439  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:44.157645  521964 type.go:168] "Request Body" body=""
	I1201 21:09:44.157713  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:44.157985  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:44.657826  521964 type.go:168] "Request Body" body=""
	I1201 21:09:44.657923  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:44.658392  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:45.157027  521964 type.go:168] "Request Body" body=""
	I1201 21:09:45.157125  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:45.157611  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:45.656842  521964 type.go:168] "Request Body" body=""
	I1201 21:09:45.656917  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:45.657187  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:46.157288  521964 type.go:168] "Request Body" body=""
	I1201 21:09:46.157362  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:46.157699  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:46.157757  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:46.657562  521964 type.go:168] "Request Body" body=""
	I1201 21:09:46.657642  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:46.658013  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:47.157757  521964 type.go:168] "Request Body" body=""
	I1201 21:09:47.157829  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:47.158112  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:47.657894  521964 type.go:168] "Request Body" body=""
	I1201 21:09:47.657972  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:47.658340  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:48.156992  521964 type.go:168] "Request Body" body=""
	I1201 21:09:48.157083  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:48.157458  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:48.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:09:48.657654  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:48.657937  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:48.657979  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:49.157706  521964 type.go:168] "Request Body" body=""
	I1201 21:09:49.157785  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:49.158140  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:49.657844  521964 type.go:168] "Request Body" body=""
	I1201 21:09:49.657921  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:49.658333  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:50.156929  521964 type.go:168] "Request Body" body=""
	I1201 21:09:50.157000  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:50.157277  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:50.656964  521964 type.go:168] "Request Body" body=""
	I1201 21:09:50.657044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:50.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:51.157095  521964 type.go:168] "Request Body" body=""
	I1201 21:09:51.157176  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:51.157528  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:51.157583  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:51.656908  521964 type.go:168] "Request Body" body=""
	I1201 21:09:51.656978  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:51.657247  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:52.156952  521964 type.go:168] "Request Body" body=""
	I1201 21:09:52.157030  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:52.157355  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:52.656992  521964 type.go:168] "Request Body" body=""
	I1201 21:09:52.657082  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:52.657488  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:53.157030  521964 type.go:168] "Request Body" body=""
	I1201 21:09:53.157109  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:53.157430  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:53.656984  521964 type.go:168] "Request Body" body=""
	I1201 21:09:53.657067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:53.657399  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:53.657456  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:54.156969  521964 type.go:168] "Request Body" body=""
	I1201 21:09:54.157048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:54.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:54.657665  521964 type.go:168] "Request Body" body=""
	I1201 21:09:54.657741  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:54.658010  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:55.157807  521964 type.go:168] "Request Body" body=""
	I1201 21:09:55.157877  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:55.158212  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:55.656938  521964 type.go:168] "Request Body" body=""
	I1201 21:09:55.657015  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:55.657364  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:56.157167  521964 type.go:168] "Request Body" body=""
	I1201 21:09:56.157246  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:56.157570  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:56.157631  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:56.657418  521964 type.go:168] "Request Body" body=""
	I1201 21:09:56.657498  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:56.657830  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:57.157641  521964 type.go:168] "Request Body" body=""
	I1201 21:09:57.157734  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:57.158097  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:57.657841  521964 type.go:168] "Request Body" body=""
	I1201 21:09:57.657910  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:57.658189  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:58.156868  521964 type.go:168] "Request Body" body=""
	I1201 21:09:58.156944  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:58.157264  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:58.656995  521964 type.go:168] "Request Body" body=""
	I1201 21:09:58.657083  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:58.657454  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:58.657513  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:59.157748  521964 type.go:168] "Request Body" body=""
	I1201 21:09:59.157815  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:59.158119  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:59.656860  521964 type.go:168] "Request Body" body=""
	I1201 21:09:59.656934  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:59.657255  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:00.182510  521964 type.go:168] "Request Body" body=""
	I1201 21:10:00.182611  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:00.182943  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:00.657771  521964 type.go:168] "Request Body" body=""
	I1201 21:10:00.657850  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:00.658154  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:00.658206  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:01.156889  521964 type.go:168] "Request Body" body=""
	I1201 21:10:01.156992  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:01.157298  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:01.657214  521964 type.go:168] "Request Body" body=""
	I1201 21:10:01.657298  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:01.657646  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:02.157865  521964 type.go:168] "Request Body" body=""
	I1201 21:10:02.157946  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:02.158249  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:02.656955  521964 type.go:168] "Request Body" body=""
	I1201 21:10:02.657029  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:02.657347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:03.156981  521964 type.go:168] "Request Body" body=""
	I1201 21:10:03.157059  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:03.157411  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:03.157464  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:03.656988  521964 type.go:168] "Request Body" body=""
	I1201 21:10:03.657085  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:03.657453  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:04.156958  521964 type.go:168] "Request Body" body=""
	I1201 21:10:04.157031  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:04.157381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:04.657145  521964 type.go:168] "Request Body" body=""
	I1201 21:10:04.657224  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:04.657551  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:05.159263  521964 type.go:168] "Request Body" body=""
	I1201 21:10:05.159342  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:05.159636  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:05.159683  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:05.656989  521964 type.go:168] "Request Body" body=""
	I1201 21:10:05.657074  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:05.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:06.157539  521964 type.go:168] "Request Body" body=""
	I1201 21:10:06.157637  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:06.158058  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:06.657526  521964 type.go:168] "Request Body" body=""
	I1201 21:10:06.657604  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:06.657867  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:07.157646  521964 type.go:168] "Request Body" body=""
	I1201 21:10:07.157727  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:07.158042  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:07.657854  521964 type.go:168] "Request Body" body=""
	I1201 21:10:07.657935  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:07.658292  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:07.658351  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:08.157603  521964 type.go:168] "Request Body" body=""
	I1201 21:10:08.157674  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:08.157973  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:08.657777  521964 type.go:168] "Request Body" body=""
	I1201 21:10:08.657862  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:08.658197  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:09.156897  521964 type.go:168] "Request Body" body=""
	I1201 21:10:09.156973  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:09.157298  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:09.656870  521964 type.go:168] "Request Body" body=""
	I1201 21:10:09.656947  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:09.657210  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:10.156995  521964 type.go:168] "Request Body" body=""
	I1201 21:10:10.157076  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:10.157429  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:10.157492  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:10.657080  521964 type.go:168] "Request Body" body=""
	I1201 21:10:10.657192  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:10.657646  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:11.157154  521964 type.go:168] "Request Body" body=""
	I1201 21:10:11.157228  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:11.157607  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:11.657517  521964 type.go:168] "Request Body" body=""
	I1201 21:10:11.657597  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:11.658000  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:12.157792  521964 type.go:168] "Request Body" body=""
	I1201 21:10:12.157864  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:12.158185  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:12.158240  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:12.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:10:12.656959  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:12.657219  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:13.156967  521964 type.go:168] "Request Body" body=""
	I1201 21:10:13.157051  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:13.157415  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:13.657121  521964 type.go:168] "Request Body" body=""
	I1201 21:10:13.657199  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:13.657550  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:14.157841  521964 type.go:168] "Request Body" body=""
	I1201 21:10:14.157913  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:14.158250  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:14.158314  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:14.656980  521964 type.go:168] "Request Body" body=""
	I1201 21:10:14.657062  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:14.657362  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:15.156981  521964 type.go:168] "Request Body" body=""
	I1201 21:10:15.157065  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:15.157428  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:15.656915  521964 type.go:168] "Request Body" body=""
	I1201 21:10:15.656989  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:15.657251  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:16.157278  521964 type.go:168] "Request Body" body=""
	I1201 21:10:16.157357  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:16.157705  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:16.657618  521964 type.go:168] "Request Body" body=""
	I1201 21:10:16.657700  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:16.658040  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:16.658091  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:17.157765  521964 type.go:168] "Request Body" body=""
	I1201 21:10:17.157836  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:17.158164  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:17.657888  521964 type.go:168] "Request Body" body=""
	I1201 21:10:17.657971  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:17.658355  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:18.156999  521964 type.go:168] "Request Body" body=""
	I1201 21:10:18.157092  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:18.157410  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:18.657112  521964 type.go:168] "Request Body" body=""
	I1201 21:10:18.657195  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:18.657574  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:19.156976  521964 type.go:168] "Request Body" body=""
	I1201 21:10:19.157055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:19.157401  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:19.157452  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:19.657118  521964 type.go:168] "Request Body" body=""
	I1201 21:10:19.657191  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:19.657516  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:20.156931  521964 type.go:168] "Request Body" body=""
	I1201 21:10:20.157015  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:20.157379  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:20.656945  521964 type.go:168] "Request Body" body=""
	I1201 21:10:20.657020  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:20.657391  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:21.157095  521964 type.go:168] "Request Body" body=""
	I1201 21:10:21.157176  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:21.157552  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:21.157608  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:21.657312  521964 type.go:168] "Request Body" body=""
	I1201 21:10:21.657400  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:21.657677  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:22.156963  521964 type.go:168] "Request Body" body=""
	I1201 21:10:22.157039  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:22.157399  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:22.656978  521964 type.go:168] "Request Body" body=""
	I1201 21:10:22.657053  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:22.657368  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:23.156906  521964 type.go:168] "Request Body" body=""
	I1201 21:10:23.156979  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:23.157247  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:23.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:10:23.657058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:23.657411  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:23.657467  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:24.157128  521964 type.go:168] "Request Body" body=""
	I1201 21:10:24.157203  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:24.157551  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:24.657808  521964 type.go:168] "Request Body" body=""
	I1201 21:10:24.657883  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:24.658178  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:25.156896  521964 type.go:168] "Request Body" body=""
	I1201 21:10:25.156988  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:25.157349  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:25.657068  521964 type.go:168] "Request Body" body=""
	I1201 21:10:25.657155  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:25.657524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:25.657581  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:26.157344  521964 type.go:168] "Request Body" body=""
	I1201 21:10:26.157430  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:26.157711  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:26.657676  521964 type.go:168] "Request Body" body=""
	I1201 21:10:26.657747  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:26.658068  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:27.157849  521964 type.go:168] "Request Body" body=""
	I1201 21:10:27.157936  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:27.158262  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:27.656928  521964 type.go:168] "Request Body" body=""
	I1201 21:10:27.656998  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:27.657287  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:28.156897  521964 type.go:168] "Request Body" body=""
	I1201 21:10:28.156978  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:28.157356  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:28.157423  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:28.656939  521964 type.go:168] "Request Body" body=""
	I1201 21:10:28.657026  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:28.657407  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:29.157147  521964 type.go:168] "Request Body" body=""
	I1201 21:10:29.157277  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:29.157661  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:29.657507  521964 type.go:168] "Request Body" body=""
	I1201 21:10:29.657592  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:29.657974  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:30.157860  521964 type.go:168] "Request Body" body=""
	I1201 21:10:30.157951  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:30.158382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:30.158453  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:30.656922  521964 type.go:168] "Request Body" body=""
	I1201 21:10:30.656991  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:30.657266  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:31.156994  521964 type.go:168] "Request Body" body=""
	I1201 21:10:31.157077  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:31.157409  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:31.657398  521964 type.go:168] "Request Body" body=""
	I1201 21:10:31.657481  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:31.657807  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:32.157533  521964 type.go:168] "Request Body" body=""
	I1201 21:10:32.157604  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:32.157880  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:32.657746  521964 type.go:168] "Request Body" body=""
	I1201 21:10:32.657828  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:32.658176  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:32.658229  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:33.156931  521964 type.go:168] "Request Body" body=""
	I1201 21:10:33.157018  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:33.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:33.657643  521964 type.go:168] "Request Body" body=""
	I1201 21:10:33.657710  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:33.658006  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:34.157807  521964 type.go:168] "Request Body" body=""
	I1201 21:10:34.157894  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:34.158278  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:34.656986  521964 type.go:168] "Request Body" body=""
	I1201 21:10:34.657059  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:34.657396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:35.157082  521964 type.go:168] "Request Body" body=""
	I1201 21:10:35.157199  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:35.157466  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:35.157521  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:35.656976  521964 type.go:168] "Request Body" body=""
	I1201 21:10:35.657056  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:35.657353  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:36.157368  521964 type.go:168] "Request Body" body=""
	I1201 21:10:36.157452  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:36.157808  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:36.657277  521964 type.go:168] "Request Body" body=""
	I1201 21:10:36.657352  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:36.657623  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:37.156972  521964 type.go:168] "Request Body" body=""
	I1201 21:10:37.157053  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:37.157410  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:37.656998  521964 type.go:168] "Request Body" body=""
	I1201 21:10:37.657079  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:37.657415  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:37.657471  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:38.156915  521964 type.go:168] "Request Body" body=""
	I1201 21:10:38.156982  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:38.157242  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:38.656962  521964 type.go:168] "Request Body" body=""
	I1201 21:10:38.657036  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:38.657361  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:39.156965  521964 type.go:168] "Request Body" body=""
	I1201 21:10:39.157041  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:39.157378  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:39.657653  521964 type.go:168] "Request Body" body=""
	I1201 21:10:39.657723  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:39.657992  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:39.658033  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:40.157791  521964 type.go:168] "Request Body" body=""
	I1201 21:10:40.157881  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:40.158267  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:40.656973  521964 type.go:168] "Request Body" body=""
	I1201 21:10:40.657052  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:40.657374  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:41.157040  521964 type.go:168] "Request Body" body=""
	I1201 21:10:41.157114  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:41.157371  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:41.657289  521964 type.go:168] "Request Body" body=""
	I1201 21:10:41.657371  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:41.657729  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:42.157592  521964 type.go:168] "Request Body" body=""
	I1201 21:10:42.157681  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:42.158115  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:42.158193  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:42.657466  521964 type.go:168] "Request Body" body=""
	I1201 21:10:42.657542  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:42.657815  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:43.157576  521964 type.go:168] "Request Body" body=""
	I1201 21:10:43.157658  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:43.158000  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:43.657674  521964 type.go:168] "Request Body" body=""
	I1201 21:10:43.657745  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:43.658086  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:44.157304  521964 type.go:168] "Request Body" body=""
	I1201 21:10:44.157391  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:44.157723  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:44.657534  521964 type.go:168] "Request Body" body=""
	I1201 21:10:44.657625  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:44.657958  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:44.658013  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:45.157841  521964 type.go:168] "Request Body" body=""
	I1201 21:10:45.157928  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:45.158336  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:45.657663  521964 type.go:168] "Request Body" body=""
	I1201 21:10:45.657751  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:45.658031  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:46.157548  521964 type.go:168] "Request Body" body=""
	I1201 21:10:46.157629  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:46.157950  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:46.657877  521964 type.go:168] "Request Body" body=""
	I1201 21:10:46.657952  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:46.658291  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:46.658347  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:47.156857  521964 type.go:168] "Request Body" body=""
	I1201 21:10:47.156933  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:47.157198  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:47.656938  521964 type.go:168] "Request Body" body=""
	I1201 21:10:47.657018  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:47.657397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:48.157015  521964 type.go:168] "Request Body" body=""
	I1201 21:10:48.157088  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:48.157423  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:48.657541  521964 type.go:168] "Request Body" body=""
	I1201 21:10:48.657618  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:48.657936  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:49.157607  521964 type.go:168] "Request Body" body=""
	I1201 21:10:49.157694  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:49.158025  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:49.158076  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:49.657812  521964 type.go:168] "Request Body" body=""
	I1201 21:10:49.657885  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:49.658194  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:50.157521  521964 type.go:168] "Request Body" body=""
	I1201 21:10:50.157593  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:50.157864  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:50.657707  521964 type.go:168] "Request Body" body=""
	I1201 21:10:50.657786  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:50.658124  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:51.157805  521964 type.go:168] "Request Body" body=""
	I1201 21:10:51.157886  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:51.158224  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:51.158279  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:51.657127  521964 type.go:168] "Request Body" body=""
	I1201 21:10:51.657207  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:51.657471  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:52.156922  521964 type.go:168] "Request Body" body=""
	I1201 21:10:52.157004  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:52.157305  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:52.656968  521964 type.go:168] "Request Body" body=""
	I1201 21:10:52.657044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:52.657379  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:53.156947  521964 type.go:168] "Request Body" body=""
	I1201 21:10:53.157022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:53.157288  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:53.656961  521964 type.go:168] "Request Body" body=""
	I1201 21:10:53.657037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:53.657360  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:53.657416  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:54.157114  521964 type.go:168] "Request Body" body=""
	I1201 21:10:54.157189  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:54.157509  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:54.656919  521964 type.go:168] "Request Body" body=""
	I1201 21:10:54.656990  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:54.657260  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:55.157007  521964 type.go:168] "Request Body" body=""
	I1201 21:10:55.157093  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:55.157520  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:55.657242  521964 type.go:168] "Request Body" body=""
	I1201 21:10:55.657323  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:55.657660  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:55.657717  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:56.157590  521964 type.go:168] "Request Body" body=""
	I1201 21:10:56.157668  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:56.157942  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:56.657918  521964 type.go:168] "Request Body" body=""
	I1201 21:10:56.657994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:56.658356  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:57.156965  521964 type.go:168] "Request Body" body=""
	I1201 21:10:57.157050  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:57.157377  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:57.657638  521964 type.go:168] "Request Body" body=""
	I1201 21:10:57.657712  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:57.657982  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:57.658023  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:58.157749  521964 type.go:168] "Request Body" body=""
	I1201 21:10:58.157829  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:58.158147  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:58.656879  521964 type.go:168] "Request Body" body=""
	I1201 21:10:58.656954  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:58.657292  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:59.156909  521964 type.go:168] "Request Body" body=""
	I1201 21:10:59.156979  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:59.157246  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:59.656979  521964 type.go:168] "Request Body" body=""
	I1201 21:10:59.657066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:59.657429  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:00.157201  521964 type.go:168] "Request Body" body=""
	I1201 21:11:00.157287  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:00.157616  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:00.157684  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:00.657844  521964 type.go:168] "Request Body" body=""
	I1201 21:11:00.657912  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:00.658231  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:01.156967  521964 type.go:168] "Request Body" body=""
	I1201 21:11:01.157048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:01.157426  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:01.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:11:01.657102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:01.657407  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:02.156872  521964 type.go:168] "Request Body" body=""
	I1201 21:11:02.156950  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:02.157232  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:02.656970  521964 type.go:168] "Request Body" body=""
	I1201 21:11:02.657044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:02.657347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:02.657392  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:03.157114  521964 type.go:168] "Request Body" body=""
	I1201 21:11:03.157193  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:03.157508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:03.656873  521964 type.go:168] "Request Body" body=""
	I1201 21:11:03.656949  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:03.657257  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:04.156981  521964 type.go:168] "Request Body" body=""
	I1201 21:11:04.157079  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:04.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:04.657086  521964 type.go:168] "Request Body" body=""
	I1201 21:11:04.657170  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:04.657515  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:04.657568  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:05.157777  521964 type.go:168] "Request Body" body=""
	I1201 21:11:05.157855  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:05.158116  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:05.657893  521964 type.go:168] "Request Body" body=""
	I1201 21:11:05.657976  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:05.658256  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:06.157228  521964 type.go:168] "Request Body" body=""
	I1201 21:11:06.157325  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:06.157672  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:06.657576  521964 type.go:168] "Request Body" body=""
	I1201 21:11:06.657644  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:06.657918  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:06.657957  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:07.157699  521964 type.go:168] "Request Body" body=""
	I1201 21:11:07.157770  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:07.158064  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:07.657781  521964 type.go:168] "Request Body" body=""
	I1201 21:11:07.657859  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:07.658224  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:08.157367  521964 type.go:168] "Request Body" body=""
	I1201 21:11:08.157437  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:08.157715  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:08.657499  521964 type.go:168] "Request Body" body=""
	I1201 21:11:08.657592  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:08.657968  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:08.658028  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:09.157829  521964 type.go:168] "Request Body" body=""
	I1201 21:11:09.157911  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:09.158288  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:09.656917  521964 type.go:168] "Request Body" body=""
	I1201 21:11:09.656990  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:09.657288  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:10.156991  521964 type.go:168] "Request Body" body=""
	I1201 21:11:10.157073  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:10.157446  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:10.657170  521964 type.go:168] "Request Body" body=""
	I1201 21:11:10.657248  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:10.657599  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:11.156833  521964 type.go:168] "Request Body" body=""
	I1201 21:11:11.156912  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:11.157200  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:11.157249  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:11.656972  521964 type.go:168] "Request Body" body=""
	I1201 21:11:11.657054  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:11.657556  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:12.157243  521964 type.go:168] "Request Body" body=""
	I1201 21:11:12.157318  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:12.157669  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:12.657823  521964 type.go:168] "Request Body" body=""
	I1201 21:11:12.657911  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:12.658208  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:13.156933  521964 type.go:168] "Request Body" body=""
	I1201 21:11:13.157010  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:13.157369  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:13.157434  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:13.657105  521964 type.go:168] "Request Body" body=""
	I1201 21:11:13.657190  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:13.657535  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:14.157809  521964 type.go:168] "Request Body" body=""
	I1201 21:11:14.157875  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:14.158149  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:14.657913  521964 type.go:168] "Request Body" body=""
	I1201 21:11:14.658000  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:14.658340  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:15.156989  521964 type.go:168] "Request Body" body=""
	I1201 21:11:15.157075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:15.157421  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:15.157479  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:15.656928  521964 type.go:168] "Request Body" body=""
	I1201 21:11:15.657004  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:15.657310  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:16.157234  521964 type.go:168] "Request Body" body=""
	I1201 21:11:16.157328  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:16.157693  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:16.657344  521964 type.go:168] "Request Body" body=""
	I1201 21:11:16.657439  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:16.657980  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:17.157136  521964 type.go:168] "Request Body" body=""
	I1201 21:11:17.157223  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:17.157592  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:17.157646  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:17.657532  521964 type.go:168] "Request Body" body=""
	I1201 21:11:17.657620  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:17.657985  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:18.157793  521964 type.go:168] "Request Body" body=""
	I1201 21:11:18.157869  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:18.158224  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:18.657332  521964 type.go:168] "Request Body" body=""
	I1201 21:11:18.657414  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:18.657739  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:19.157633  521964 type.go:168] "Request Body" body=""
	I1201 21:11:19.157712  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:19.158075  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:19.158138  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:19.656860  521964 type.go:168] "Request Body" body=""
	I1201 21:11:19.656944  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:19.657367  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:20.157129  521964 type.go:168] "Request Body" body=""
	I1201 21:11:20.157251  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:20.157538  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:20.656988  521964 type.go:168] "Request Body" body=""
	I1201 21:11:20.657069  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:20.657403  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:21.157154  521964 type.go:168] "Request Body" body=""
	I1201 21:11:21.157249  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:21.157653  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:21.657489  521964 type.go:168] "Request Body" body=""
	I1201 21:11:21.657579  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:21.657887  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:21.657951  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:22.157730  521964 type.go:168] "Request Body" body=""
	I1201 21:11:22.157807  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:22.158188  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:22.656943  521964 type.go:168] "Request Body" body=""
	I1201 21:11:22.657022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:22.657362  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:23.157066  521964 type.go:168] "Request Body" body=""
	I1201 21:11:23.157143  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:23.157413  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:23.656990  521964 type.go:168] "Request Body" body=""
	I1201 21:11:23.657067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:23.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:24.157147  521964 type.go:168] "Request Body" body=""
	I1201 21:11:24.157227  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:24.157551  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:24.157604  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:24.657818  521964 type.go:168] "Request Body" body=""
	I1201 21:11:24.657890  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:24.658165  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:25.156979  521964 type.go:168] "Request Body" body=""
	I1201 21:11:25.157066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:25.157466  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:25.657192  521964 type.go:168] "Request Body" body=""
	I1201 21:11:25.657269  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:25.657598  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:26.157266  521964 type.go:168] "Request Body" body=""
	I1201 21:11:26.157339  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:26.157618  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:26.157661  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:26.657561  521964 type.go:168] "Request Body" body=""
	I1201 21:11:26.657639  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:26.658002  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:27.157818  521964 type.go:168] "Request Body" body=""
	I1201 21:11:27.157901  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:27.158277  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:27.657008  521964 type.go:168] "Request Body" body=""
	I1201 21:11:27.657074  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:27.657338  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:28.157024  521964 type.go:168] "Request Body" body=""
	I1201 21:11:28.157108  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:28.157462  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:28.657032  521964 type.go:168] "Request Body" body=""
	I1201 21:11:28.657112  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:28.657442  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:28.657505  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:29.157808  521964 type.go:168] "Request Body" body=""
	I1201 21:11:29.157877  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:29.158164  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:29.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:11:29.656994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:29.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:30.157157  521964 type.go:168] "Request Body" body=""
	I1201 21:11:30.157249  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:30.157650  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:30.657372  521964 type.go:168] "Request Body" body=""
	I1201 21:11:30.657451  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:30.657748  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:30.657794  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:31.157596  521964 type.go:168] "Request Body" body=""
	I1201 21:11:31.157692  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:31.158099  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:31.657089  521964 type.go:168] "Request Body" body=""
	I1201 21:11:31.657174  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:31.657530  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:32.156920  521964 type.go:168] "Request Body" body=""
	I1201 21:11:32.157002  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:32.157283  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:32.656965  521964 type.go:168] "Request Body" body=""
	I1201 21:11:32.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:32.657400  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:33.157120  521964 type.go:168] "Request Body" body=""
	I1201 21:11:33.157204  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:33.157580  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:33.157650  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:33.656925  521964 type.go:168] "Request Body" body=""
	I1201 21:11:33.657005  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:33.657282  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:34.156999  521964 type.go:168] "Request Body" body=""
	I1201 21:11:34.157085  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:34.157508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:34.657236  521964 type.go:168] "Request Body" body=""
	I1201 21:11:34.657329  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:34.657650  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:35.156913  521964 type.go:168] "Request Body" body=""
	I1201 21:11:35.156987  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:35.157331  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:35.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:11:35.657055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:35.657385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:35.657436  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:36.157278  521964 type.go:168] "Request Body" body=""
	I1201 21:11:36.157365  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:36.157713  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:36.657755  521964 type.go:168] "Request Body" body=""
	I1201 21:11:36.657874  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:36.658213  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:37.156873  521964 type.go:168] "Request Body" body=""
	I1201 21:11:37.156946  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:37.157318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:37.656921  521964 type.go:168] "Request Body" body=""
	I1201 21:11:37.656998  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:37.657365  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:38.157094  521964 type.go:168] "Request Body" body=""
	I1201 21:11:38.157172  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:38.157449  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:38.157537  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:38.656976  521964 type.go:168] "Request Body" body=""
	I1201 21:11:38.657054  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:38.657414  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:39.157117  521964 type.go:168] "Request Body" body=""
	I1201 21:11:39.157193  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:39.157513  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:39.656888  521964 type.go:168] "Request Body" body=""
	I1201 21:11:39.656994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:39.657266  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:40.156958  521964 type.go:168] "Request Body" body=""
	I1201 21:11:40.157031  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:40.157358  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:40.657069  521964 type.go:168] "Request Body" body=""
	I1201 21:11:40.657148  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:40.657480  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:40.657538  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:41.156915  521964 type.go:168] "Request Body" body=""
	I1201 21:11:41.156983  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:41.157301  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:41.657216  521964 type.go:168] "Request Body" body=""
	I1201 21:11:41.657295  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:41.657644  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:42.157003  521964 type.go:168] "Request Body" body=""
	I1201 21:11:42.157088  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:42.157475  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:42.657872  521964 type.go:168] "Request Body" body=""
	I1201 21:11:42.657940  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:42.658284  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:42.658338  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:43.156961  521964 type.go:168] "Request Body" body=""
	I1201 21:11:43.157034  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:43.157374  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:43.657103  521964 type.go:168] "Request Body" body=""
	I1201 21:11:43.657182  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:43.657524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:44.156866  521964 type.go:168] "Request Body" body=""
	I1201 21:11:44.156937  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:44.157219  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:44.656982  521964 type.go:168] "Request Body" body=""
	I1201 21:11:44.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:44.657376  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:45.157037  521964 type.go:168] "Request Body" body=""
	I1201 21:11:45.157121  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:45.157482  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:45.157545  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:45.657188  521964 type.go:168] "Request Body" body=""
	I1201 21:11:45.657259  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:45.657524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:46.157054  521964 type.go:168] "Request Body" body=""
	I1201 21:11:46.157131  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:46.157511  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:46.656990  521964 type.go:168] "Request Body" body=""
	I1201 21:11:46.657078  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:46.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:47.157109  521964 type.go:168] "Request Body" body=""
	I1201 21:11:47.157180  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:47.157448  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:47.656964  521964 type.go:168] "Request Body" body=""
	I1201 21:11:47.657093  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:47.657408  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:47.657462  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:48.156988  521964 type.go:168] "Request Body" body=""
	I1201 21:11:48.157067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:48.157406  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:48.657126  521964 type.go:168] "Request Body" body=""
	I1201 21:11:48.657197  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:48.657487  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:49.156981  521964 type.go:168] "Request Body" body=""
	I1201 21:11:49.157066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:49.157397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:49.656960  521964 type.go:168] "Request Body" body=""
	I1201 21:11:49.657037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:49.657346  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:50.156927  521964 type.go:168] "Request Body" body=""
	I1201 21:11:50.157010  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:50.157276  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:50.157327  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:50.657022  521964 type.go:168] "Request Body" body=""
	I1201 21:11:50.657102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:50.657478  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:51.156975  521964 type.go:168] "Request Body" body=""
	I1201 21:11:51.157058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:51.157409  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:51.656919  521964 type.go:168] "Request Body" body=""
	I1201 21:11:51.656998  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:51.657362  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:52.156975  521964 type.go:168] "Request Body" body=""
	I1201 21:11:52.157051  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:52.157406  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:52.157465  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:52.657158  521964 type.go:168] "Request Body" body=""
	I1201 21:11:52.657238  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:52.657574  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:53.156907  521964 type.go:168] "Request Body" body=""
	I1201 21:11:53.156984  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:53.157259  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:53.656971  521964 type.go:168] "Request Body" body=""
	I1201 21:11:53.657042  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:53.657409  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:54.156987  521964 type.go:168] "Request Body" body=""
	I1201 21:11:54.157066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:54.157400  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:54.656919  521964 type.go:168] "Request Body" body=""
	I1201 21:11:54.656994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:54.657292  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:54.657346  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:55.156967  521964 type.go:168] "Request Body" body=""
	I1201 21:11:55.157050  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:55.157385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:55.656978  521964 type.go:168] "Request Body" body=""
	I1201 21:11:55.657053  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:55.657357  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:56.157331  521964 type.go:168] "Request Body" body=""
	I1201 21:11:56.157412  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:56.157693  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:56.657721  521964 type.go:168] "Request Body" body=""
	I1201 21:11:56.657797  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:56.658158  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:56.658204  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:57.156915  521964 type.go:168] "Request Body" body=""
	I1201 21:11:57.157002  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:57.157388  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:57.657664  521964 type.go:168] "Request Body" body=""
	I1201 21:11:57.657735  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:57.658000  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:58.157786  521964 type.go:168] "Request Body" body=""
	I1201 21:11:58.157861  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:58.158295  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:58.657007  521964 type.go:168] "Request Body" body=""
	I1201 21:11:58.657100  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:58.657480  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:59.157749  521964 type.go:168] "Request Body" body=""
	I1201 21:11:59.157823  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:59.158141  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:59.158186  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:59.656847  521964 type.go:168] "Request Body" body=""
	I1201 21:11:59.656927  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:59.657290  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:00.156952  521964 type.go:168] "Request Body" body=""
	I1201 21:12:00.157062  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:00.157388  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:00.657065  521964 type.go:168] "Request Body" body=""
	I1201 21:12:00.657141  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:00.657419  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:01.156993  521964 type.go:168] "Request Body" body=""
	I1201 21:12:01.157080  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:01.157418  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:01.657372  521964 type.go:168] "Request Body" body=""
	I1201 21:12:01.657452  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:01.657807  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:01.657861  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:02.156990  521964 type.go:168] "Request Body" body=""
	I1201 21:12:02.157067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:02.157446  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:02.656975  521964 type.go:168] "Request Body" body=""
	I1201 21:12:02.657050  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:02.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:03.157097  521964 type.go:168] "Request Body" body=""
	I1201 21:12:03.157177  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:03.157545  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:03.657864  521964 type.go:168] "Request Body" body=""
	I1201 21:12:03.657940  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:03.658290  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:03.658354  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:04.157043  521964 type.go:168] "Request Body" body=""
	I1201 21:12:04.157122  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:04.157481  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:04.657071  521964 type.go:168] "Request Body" body=""
	I1201 21:12:04.657150  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:04.657508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:05.157762  521964 type.go:168] "Request Body" body=""
	I1201 21:12:05.157829  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:05.158111  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:05.657870  521964 type.go:168] "Request Body" body=""
	I1201 21:12:05.658003  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:05.658357  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:05.658411  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:06.157176  521964 type.go:168] "Request Body" body=""
	I1201 21:12:06.157261  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:06.157642  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:06.657501  521964 type.go:168] "Request Body" body=""
	I1201 21:12:06.657577  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:06.657845  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:07.157682  521964 type.go:168] "Request Body" body=""
	I1201 21:12:07.157766  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:07.158185  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:07.656894  521964 type.go:168] "Request Body" body=""
	I1201 21:12:07.656972  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:07.657318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:08.157028  521964 type.go:168] "Request Body" body=""
	I1201 21:12:08.157109  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:08.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:08.157437  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:08.656996  521964 type.go:168] "Request Body" body=""
	I1201 21:12:08.657072  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:08.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:09.157160  521964 type.go:168] "Request Body" body=""
	I1201 21:12:09.157245  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:09.157627  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:09.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:12:09.656966  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:09.657243  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:10.156932  521964 type.go:168] "Request Body" body=""
	I1201 21:12:10.157016  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:10.157347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:10.657029  521964 type.go:168] "Request Body" body=""
	I1201 21:12:10.657110  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:10.657478  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:10.657537  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:11.157239  521964 type.go:168] "Request Body" body=""
	I1201 21:12:11.157313  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:11.157609  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:11.657334  521964 type.go:168] "Request Body" body=""
	I1201 21:12:11.657410  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:11.657733  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:12.157529  521964 type.go:168] "Request Body" body=""
	I1201 21:12:12.157603  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:12.157977  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:12.657303  521964 type.go:168] "Request Body" body=""
	I1201 21:12:12.657379  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:12.657647  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:12.657692  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:13.156979  521964 type.go:168] "Request Body" body=""
	I1201 21:12:13.157059  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:13.157445  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:13.657161  521964 type.go:168] "Request Body" body=""
	I1201 21:12:13.657236  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:13.657560  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:14.157233  521964 type.go:168] "Request Body" body=""
	I1201 21:12:14.157309  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:14.157583  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:14.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:12:14.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:14.657408  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:15.157135  521964 type.go:168] "Request Body" body=""
	I1201 21:12:15.157216  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:15.157563  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:15.157629  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:15.657856  521964 type.go:168] "Request Body" body=""
	I1201 21:12:15.657928  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:15.658198  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:16.157210  521964 type.go:168] "Request Body" body=""
	I1201 21:12:16.157294  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:16.157627  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:16.657499  521964 type.go:168] "Request Body" body=""
	I1201 21:12:16.657580  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:16.657918  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:17.157664  521964 type.go:168] "Request Body" body=""
	I1201 21:12:17.157737  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:17.158007  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:17.158051  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:17.657817  521964 type.go:168] "Request Body" body=""
	I1201 21:12:17.657893  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:17.658321  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:18.157126  521964 type.go:168] "Request Body" body=""
	I1201 21:12:18.157218  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:18.157616  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:18.657309  521964 type.go:168] "Request Body" body=""
	I1201 21:12:18.657377  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:18.657641  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:19.157459  521964 type.go:168] "Request Body" body=""
	I1201 21:12:19.157533  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:19.157874  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:19.657700  521964 type.go:168] "Request Body" body=""
	I1201 21:12:19.657774  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:19.658113  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:19.658170  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:20.157420  521964 type.go:168] "Request Body" body=""
	I1201 21:12:20.157499  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:20.157831  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:20.657717  521964 type.go:168] "Request Body" body=""
	I1201 21:12:20.657790  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:20.658137  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:21.156870  521964 type.go:168] "Request Body" body=""
	I1201 21:12:21.156955  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:21.157335  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:21.656896  521964 type.go:168] "Request Body" body=""
	I1201 21:12:21.656973  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:21.657240  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:22.156959  521964 type.go:168] "Request Body" body=""
	I1201 21:12:22.157032  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:22.157337  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:22.157382  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:22.656961  521964 type.go:168] "Request Body" body=""
	I1201 21:12:22.657035  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:22.657334  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:23.156895  521964 type.go:168] "Request Body" body=""
	I1201 21:12:23.156974  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:23.157240  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:23.656942  521964 type.go:168] "Request Body" body=""
	I1201 21:12:23.657018  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:23.657321  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:24.156930  521964 type.go:168] "Request Body" body=""
	I1201 21:12:24.157030  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:24.157353  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:24.157404  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:24.657661  521964 type.go:168] "Request Body" body=""
	I1201 21:12:24.657744  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:24.658139  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:25.156898  521964 type.go:168] "Request Body" body=""
	I1201 21:12:25.157058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:25.157380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:25.657004  521964 type.go:168] "Request Body" body=""
	I1201 21:12:25.657102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:25.657473  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:26.157364  521964 type.go:168] "Request Body" body=""
	I1201 21:12:26.157445  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:26.157715  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:26.157767  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:26.657747  521964 type.go:168] "Request Body" body=""
	I1201 21:12:26.657820  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:26.658119  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:27.157901  521964 type.go:168] "Request Body" body=""
	I1201 21:12:27.157983  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:27.158328  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:27.656880  521964 type.go:168] "Request Body" body=""
	I1201 21:12:27.656966  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:27.657232  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:28.156968  521964 type.go:168] "Request Body" body=""
	I1201 21:12:28.157046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:28.157396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:28.657122  521964 type.go:168] "Request Body" body=""
	I1201 21:12:28.657193  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:28.657567  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:28.657618  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:29.157156  521964 type.go:168] "Request Body" body=""
	I1201 21:12:29.157234  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:29.157509  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:29.656952  521964 type.go:168] "Request Body" body=""
	I1201 21:12:29.657026  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:29.657361  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:30.156982  521964 type.go:168] "Request Body" body=""
	I1201 21:12:30.157060  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:30.157416  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:30.657692  521964 type.go:168] "Request Body" body=""
	I1201 21:12:30.657762  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:30.658041  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:30.658082  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:31.157866  521964 type.go:168] "Request Body" body=""
	I1201 21:12:31.157947  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:31.158324  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:31.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:12:31.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:31.657381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:32.157066  521964 type.go:168] "Request Body" body=""
	I1201 21:12:32.157144  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:32.157399  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:32.656944  521964 type.go:168] "Request Body" body=""
	I1201 21:12:32.657015  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:32.657365  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:33.156964  521964 type.go:168] "Request Body" body=""
	I1201 21:12:33.157045  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:33.157424  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:33.157484  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:33.657133  521964 type.go:168] "Request Body" body=""
	I1201 21:12:33.657209  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:33.657460  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:34.156966  521964 type.go:168] "Request Body" body=""
	I1201 21:12:34.157049  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:34.157398  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:34.657117  521964 type.go:168] "Request Body" body=""
	I1201 21:12:34.657200  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:34.657538  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:35.157873  521964 type.go:168] "Request Body" body=""
	I1201 21:12:35.157950  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:35.158226  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:35.158268  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:35.656942  521964 type.go:168] "Request Body" body=""
	I1201 21:12:35.657022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:35.657366  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:36.157253  521964 type.go:168] "Request Body" body=""
	I1201 21:12:36.157329  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:36.157665  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:36.657154  521964 type.go:168] "Request Body" body=""
	I1201 21:12:36.657221  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:36.657490  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:37.157161  521964 type.go:168] "Request Body" body=""
	I1201 21:12:37.157235  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:37.157578  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:37.657162  521964 type.go:168] "Request Body" body=""
	I1201 21:12:37.657242  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:37.657583  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:37.657637  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:38.156913  521964 type.go:168] "Request Body" body=""
	I1201 21:12:38.156993  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:38.157311  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:38.656975  521964 type.go:168] "Request Body" body=""
	I1201 21:12:38.657056  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:38.657412  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:39.157107  521964 type.go:168] "Request Body" body=""
	I1201 21:12:39.157181  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:39.157541  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:39.657246  521964 type.go:168] "Request Body" body=""
	I1201 21:12:39.657329  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:39.657614  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:40.157008  521964 type.go:168] "Request Body" body=""
	I1201 21:12:40.157081  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:40.157402  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:40.157459  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:40.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:12:40.657058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:40.657389  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:41.156917  521964 type.go:168] "Request Body" body=""
	I1201 21:12:41.157011  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:41.157297  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:41.656997  521964 type.go:168] "Request Body" body=""
	I1201 21:12:41.657083  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:41.657499  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:42.157169  521964 type.go:168] "Request Body" body=""
	I1201 21:12:42.157262  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:42.157666  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:42.157723  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:42.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:12:42.656961  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:42.657222  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:43.156956  521964 type.go:168] "Request Body" body=""
	I1201 21:12:43.157047  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:43.157347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:43.657015  521964 type.go:168] "Request Body" body=""
	I1201 21:12:43.657087  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:43.657366  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:44.156909  521964 type.go:168] "Request Body" body=""
	I1201 21:12:44.156982  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:44.157261  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:44.656982  521964 type.go:168] "Request Body" body=""
	I1201 21:12:44.657068  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:44.657431  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:44.657488  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:45.157013  521964 type.go:168] "Request Body" body=""
	I1201 21:12:45.157096  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:45.157431  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:45.657107  521964 type.go:168] "Request Body" body=""
	I1201 21:12:45.657195  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:45.657476  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:46.157495  521964 type.go:168] "Request Body" body=""
	I1201 21:12:46.157580  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:46.157930  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:46.656884  521964 type.go:168] "Request Body" body=""
	I1201 21:12:46.656964  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:46.657318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:47.157023  521964 type.go:168] "Request Body" body=""
	I1201 21:12:47.157100  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:47.157421  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:47.157476  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:47.656956  521964 type.go:168] "Request Body" body=""
	I1201 21:12:47.657031  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:47.657374  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:48.156953  521964 type.go:168] "Request Body" body=""
	I1201 21:12:48.157032  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:48.157373  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:48.656942  521964 type.go:168] "Request Body" body=""
	I1201 21:12:48.657023  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:48.657325  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:49.157039  521964 type.go:168] "Request Body" body=""
	I1201 21:12:49.157121  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:49.157480  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:49.157538  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:49.656960  521964 type.go:168] "Request Body" body=""
	I1201 21:12:49.657039  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:49.657352  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:50.156889  521964 type.go:168] "Request Body" body=""
	I1201 21:12:50.156960  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:50.157229  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:50.656950  521964 type.go:168] "Request Body" body=""
	I1201 21:12:50.657037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:50.657397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:51.157121  521964 type.go:168] "Request Body" body=""
	I1201 21:12:51.157204  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:51.157551  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:51.157618  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:51.657566  521964 type.go:168] "Request Body" body=""
	I1201 21:12:51.657641  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:51.657931  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:52.157799  521964 type.go:168] "Request Body" body=""
	I1201 21:12:52.157888  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:52.158264  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:52.656986  521964 type.go:168] "Request Body" body=""
	I1201 21:12:52.657083  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:52.657426  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:53.157683  521964 type.go:168] "Request Body" body=""
	I1201 21:12:53.157769  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:53.158044  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:53.158097  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:53.657845  521964 type.go:168] "Request Body" body=""
	I1201 21:12:53.657932  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:53.658305  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:54.156954  521964 type.go:168] "Request Body" body=""
	I1201 21:12:54.157044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:54.157395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:54.656951  521964 type.go:168] "Request Body" body=""
	I1201 21:12:54.657024  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:54.657370  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:55.157133  521964 type.go:168] "Request Body" body=""
	I1201 21:12:55.157212  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:55.157580  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:55.657319  521964 type.go:168] "Request Body" body=""
	I1201 21:12:55.657404  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:55.657768  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:55.657823  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:56.157456  521964 type.go:168] "Request Body" body=""
	I1201 21:12:56.157537  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:56.157827  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:56.657750  521964 type.go:168] "Request Body" body=""
	I1201 21:12:56.657836  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:56.658210  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:57.156961  521964 type.go:168] "Request Body" body=""
	I1201 21:12:57.157036  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:57.157395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:57.657097  521964 type.go:168] "Request Body" body=""
	I1201 21:12:57.657174  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:57.657457  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:58.156992  521964 type.go:168] "Request Body" body=""
	I1201 21:12:58.157072  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:58.157466  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:58.157532  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:58.657043  521964 type.go:168] "Request Body" body=""
	I1201 21:12:58.657124  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:58.657483  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:59.156864  521964 type.go:168] "Request Body" body=""
	I1201 21:12:59.156938  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:59.157199  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:59.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:12:59.656974  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:59.657286  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:00.157057  521964 type.go:168] "Request Body" body=""
	I1201 21:13:00.157147  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:00.157511  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:00.157569  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:00.657428  521964 type.go:168] "Request Body" body=""
	I1201 21:13:00.657504  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:00.657796  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:01.157663  521964 type.go:168] "Request Body" body=""
	I1201 21:13:01.157764  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:01.158124  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:01.656992  521964 type.go:168] "Request Body" body=""
	I1201 21:13:01.657066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:01.657380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:02.157714  521964 type.go:168] "Request Body" body=""
	I1201 21:13:02.157793  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:02.158080  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:02.158125  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:02.657871  521964 type.go:168] "Request Body" body=""
	I1201 21:13:02.657947  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:02.658316  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:03.156973  521964 type.go:168] "Request Body" body=""
	I1201 21:13:03.157059  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:03.157502  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:03.657012  521964 type.go:168] "Request Body" body=""
	I1201 21:13:03.657090  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:03.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:04.157107  521964 type.go:168] "Request Body" body=""
	I1201 21:13:04.157183  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:04.157524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:04.657241  521964 type.go:168] "Request Body" body=""
	I1201 21:13:04.657321  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:04.657639  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:04.657698  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:05.156921  521964 type.go:168] "Request Body" body=""
	I1201 21:13:05.157001  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:05.157325  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:05.656994  521964 type.go:168] "Request Body" body=""
	I1201 21:13:05.657078  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:05.657437  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:06.157391  521964 type.go:168] "Request Body" body=""
	I1201 21:13:06.157477  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:06.157856  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:06.657298  521964 type.go:168] "Request Body" body=""
	I1201 21:13:06.657378  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:06.657684  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:06.657732  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:07.157498  521964 type.go:168] "Request Body" body=""
	I1201 21:13:07.157579  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:07.157929  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:07.657804  521964 type.go:168] "Request Body" body=""
	I1201 21:13:07.657885  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:07.658219  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:08.157597  521964 type.go:168] "Request Body" body=""
	I1201 21:13:08.157669  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:08.157933  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:08.657711  521964 type.go:168] "Request Body" body=""
	I1201 21:13:08.657785  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:08.658162  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:08.658217  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:09.156936  521964 type.go:168] "Request Body" body=""
	I1201 21:13:09.157015  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:09.157375  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:09.657620  521964 type.go:168] "Request Body" body=""
	I1201 21:13:09.657765  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:09.658040  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:10.157874  521964 type.go:168] "Request Body" body=""
	I1201 21:13:10.157960  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:10.158354  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:10.656946  521964 type.go:168] "Request Body" body=""
	I1201 21:13:10.657024  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:10.657358  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:11.157610  521964 type.go:168] "Request Body" body=""
	I1201 21:13:11.157697  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:11.157986  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:11.158031  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:11.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:13:11.656964  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:11.657296  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:12.157004  521964 type.go:168] "Request Body" body=""
	I1201 21:13:12.157081  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:12.157397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:12.657679  521964 type.go:168] "Request Body" body=""
	I1201 21:13:12.657749  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:12.658023  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:13.157872  521964 type.go:168] "Request Body" body=""
	I1201 21:13:13.157950  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:13.158289  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:13.158341  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:13.656969  521964 type.go:168] "Request Body" body=""
	I1201 21:13:13.657052  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:13.657394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:14.156916  521964 type.go:168] "Request Body" body=""
	I1201 21:13:14.156994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:14.157319  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:14.656957  521964 type.go:168] "Request Body" body=""
	I1201 21:13:14.657034  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:14.657371  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:15.156988  521964 type.go:168] "Request Body" body=""
	I1201 21:13:15.157084  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:15.157470  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:15.656852  521964 type.go:168] "Request Body" body=""
	I1201 21:13:15.656945  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:15.657219  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:15.657269  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:16.157274  521964 type.go:168] "Request Body" body=""
	I1201 21:13:16.157357  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:16.157728  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:16.657690  521964 type.go:168] "Request Body" body=""
	I1201 21:13:16.657781  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:16.658180  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:17.157176  521964 type.go:168] "Request Body" body=""
	I1201 21:13:17.157257  521964 node_ready.go:38] duration metric: took 6m0.000516111s for node "functional-198694" to be "Ready" ...
	I1201 21:13:17.164775  521964 out.go:203] 
	W1201 21:13:17.167674  521964 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1201 21:13:17.167697  521964 out.go:285] * 
	W1201 21:13:17.169852  521964 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 21:13:17.172668  521964 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-198694 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.64536883s for "functional-198694" cluster.
I1201 21:13:17.843599  486002 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-198694
helpers_test.go:243: (dbg) docker inspect functional-198694:

-- stdout --
	[
	    {
	        "Id": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	        "Created": "2025-12-01T20:58:43.365574809Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515902,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:58:43.423541772Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hostname",
	        "HostsPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hosts",
	        "LogPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8-json.log",
	        "Name": "/functional-198694",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-198694:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-198694",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	                "LowerDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26-init/diff:/var/lib/docker/overlay2/f0ba49b44048d740697b37803f992c2f7a99e21ce77995ff128ceffc01329aa1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-198694",
	                "Source": "/var/lib/docker/volumes/functional-198694/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-198694",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-198694",
	                "name.minikube.sigs.k8s.io": "functional-198694",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8cb3cb57c35171bfce361b9e0de9c9f36ef89baf5e4ad0dd73159d10f1056820",
	            "SandboxKey": "/var/run/docker/netns/8cb3cb57c351",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-198694": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:9a:72:4c:a4:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9750c903db8645b2871ee2eb6fd897b77e607b9a995005513c7bcf81da63c819",
	                    "EndpointID": "884d9ec9fdfc44c10ccd4516f4ea05a765fb3ccb2118db0e8af2392e8613c402",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-198694",
	                        "e545295bd958"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694: exit status 2 (390.217359ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-198694 logs -n 25: (1.072398454s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-074555 ssh sudo cat /etc/ssl/certs/486002.pem                                                                                                  │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh            │ functional-074555 ssh sudo cat /usr/share/ca-certificates/486002.pem                                                                                      │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image ls                                                                                                                                │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh            │ functional-074555 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image save kicbase/echo-server:functional-074555 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh            │ functional-074555 ssh sudo cat /etc/ssl/certs/4860022.pem                                                                                                 │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image rm kicbase/echo-server:functional-074555 --alsologtostderr                                                                        │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh            │ functional-074555 ssh sudo cat /usr/share/ca-certificates/4860022.pem                                                                                     │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image ls                                                                                                                                │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh            │ functional-074555 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image save --daemon kicbase/echo-server:functional-074555 --alsologtostderr                                                             │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ update-context │ functional-074555 update-context --alsologtostderr -v=2                                                                                                   │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ update-context │ functional-074555 update-context --alsologtostderr -v=2                                                                                                   │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ update-context │ functional-074555 update-context --alsologtostderr -v=2                                                                                                   │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image ls --format short --alsologtostderr                                                                                               │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image ls --format yaml --alsologtostderr                                                                                                │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh            │ functional-074555 ssh pgrep buildkitd                                                                                                                     │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │                     │
	│ image          │ functional-074555 image ls --format json --alsologtostderr                                                                                                │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image ls --format table --alsologtostderr                                                                                               │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image build -t localhost/my-image:functional-074555 testdata/build --alsologtostderr                                                    │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image ls                                                                                                                                │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ delete         │ -p functional-074555                                                                                                                                      │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ start          │ -p functional-198694 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │                     │
	│ start          │ -p functional-198694 --alsologtostderr -v=8                                                                                                               │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:07 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 21:07:11
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 21:07:11.242920  521964 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:07:11.243351  521964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:07:11.243387  521964 out.go:374] Setting ErrFile to fd 2...
	I1201 21:07:11.243410  521964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:07:11.243711  521964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:07:11.244177  521964 out.go:368] Setting JSON to false
	I1201 21:07:11.245066  521964 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10181,"bootTime":1764613051,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 21:07:11.245167  521964 start.go:143] virtualization:  
	I1201 21:07:11.248721  521964 out.go:179] * [functional-198694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 21:07:11.252584  521964 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 21:07:11.252676  521964 notify.go:221] Checking for updates...
	I1201 21:07:11.258436  521964 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 21:07:11.261368  521964 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:11.264327  521964 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 21:07:11.267307  521964 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 21:07:11.270189  521964 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 21:07:11.273718  521964 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:07:11.273862  521964 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 21:07:11.298213  521964 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 21:07:11.298331  521964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:07:11.359645  521964 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 21:07:11.34998497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:07:11.359790  521964 docker.go:319] overlay module found
	I1201 21:07:11.364655  521964 out.go:179] * Using the docker driver based on existing profile
	I1201 21:07:11.367463  521964 start.go:309] selected driver: docker
	I1201 21:07:11.367488  521964 start.go:927] validating driver "docker" against &{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:07:11.367603  521964 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 21:07:11.367700  521964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:07:11.423386  521964 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 21:07:11.414394313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:07:11.423798  521964 cni.go:84] Creating CNI manager for ""
	I1201 21:07:11.423867  521964 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:07:11.423916  521964 start.go:353] cluster config:
	{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:07:11.427203  521964 out.go:179] * Starting "functional-198694" primary control-plane node in "functional-198694" cluster
	I1201 21:07:11.430063  521964 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 21:07:11.433025  521964 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 21:07:11.436022  521964 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:07:11.436110  521964 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 21:07:11.455717  521964 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 21:07:11.455744  521964 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1201 21:07:11.500566  521964 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1201 21:07:11.687123  521964 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1201 21:07:11.687287  521964 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/config.json ...
	I1201 21:07:11.687539  521964 cache.go:243] Successfully downloaded all kic artifacts
	I1201 21:07:11.687581  521964 start.go:360] acquireMachinesLock for functional-198694: {Name:mk75190be8638b73bbf357fb21be879be3d32136 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.687647  521964 start.go:364] duration metric: took 33.501µs to acquireMachinesLock for "functional-198694"
	I1201 21:07:11.687664  521964 start.go:96] Skipping create...Using existing machine configuration
	I1201 21:07:11.687669  521964 fix.go:54] fixHost starting: 
	I1201 21:07:11.687932  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:11.688204  521964 cache.go:107] acquiring lock: {Name:mkc02adc0b0ac86da96d7b1c6f73dd96db198bdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688271  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 21:07:11.688285  521964 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.581µs
	I1201 21:07:11.688306  521964 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 21:07:11.688318  521964 cache.go:107] acquiring lock: {Name:mk453dcc67fddeb9d4497c9de9efb4fa1295449c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688354  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 21:07:11.688367  521964 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 50.575µs
	I1201 21:07:11.688373  521964 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688390  521964 cache.go:107] acquiring lock: {Name:mk419ddf7fad28d46855543ef84396416e53becc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688439  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 21:07:11.688445  521964 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 57.213µs
	I1201 21:07:11.688452  521964 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688467  521964 cache.go:107] acquiring lock: {Name:mka55d294ab8a696f44b35601f713e0abbf24c5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688503  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 21:07:11.688513  521964 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 47.581µs
	I1201 21:07:11.688520  521964 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688529  521964 cache.go:107] acquiring lock: {Name:mk6dcec1fac0989e081c750d70caa7d5974f0e1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688566  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 21:07:11.688576  521964 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 47.712µs
	I1201 21:07:11.688582  521964 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688591  521964 cache.go:107] acquiring lock: {Name:mkf9aa1f704582196eb72cf90c132f43843b4423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688628  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1201 21:07:11.688637  521964 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 46.916µs
	I1201 21:07:11.688643  521964 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 21:07:11.688652  521964 cache.go:107] acquiring lock: {Name:mk60d129c4890b38a9b86e2bfa4a9fa21bc4f57a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688684  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 21:07:11.688693  521964 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 41.952µs
	I1201 21:07:11.688698  521964 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 21:07:11.688707  521964 cache.go:107] acquiring lock: {Name:mk345d9c863dd9143d9156cb17f795118869c197 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688742  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 21:07:11.688749  521964 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 43.527µs
	I1201 21:07:11.688755  521964 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 21:07:11.688763  521964 cache.go:87] Successfully saved all images to host disk.
	I1201 21:07:11.706210  521964 fix.go:112] recreateIfNeeded on functional-198694: state=Running err=<nil>
	W1201 21:07:11.706244  521964 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 21:07:11.709560  521964 out.go:252] * Updating the running docker "functional-198694" container ...
	I1201 21:07:11.709599  521964 machine.go:94] provisionDockerMachine start ...
	I1201 21:07:11.709692  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:11.727308  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:11.727671  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:11.727690  521964 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 21:07:11.874686  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:07:11.874711  521964 ubuntu.go:182] provisioning hostname "functional-198694"
	I1201 21:07:11.874786  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:11.892845  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:11.893165  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:11.893181  521964 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-198694 && echo "functional-198694" | sudo tee /etc/hostname
	I1201 21:07:12.052942  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:07:12.053034  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.072030  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:12.072356  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:12.072379  521964 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-198694' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-198694/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-198694' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 21:07:12.227676  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 21:07:12.227702  521964 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-482752/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-482752/.minikube}
	I1201 21:07:12.227769  521964 ubuntu.go:190] setting up certificates
	I1201 21:07:12.227787  521964 provision.go:84] configureAuth start
	I1201 21:07:12.227860  521964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:07:12.247353  521964 provision.go:143] copyHostCerts
	I1201 21:07:12.247405  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 21:07:12.247445  521964 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem, removing ...
	I1201 21:07:12.247463  521964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 21:07:12.247541  521964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem (1082 bytes)
	I1201 21:07:12.247639  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 21:07:12.247660  521964 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem, removing ...
	I1201 21:07:12.247665  521964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 21:07:12.247698  521964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem (1123 bytes)
	I1201 21:07:12.247755  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 21:07:12.247776  521964 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem, removing ...
	I1201 21:07:12.247785  521964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 21:07:12.247814  521964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem (1675 bytes)
	I1201 21:07:12.247874  521964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem org=jenkins.functional-198694 san=[127.0.0.1 192.168.49.2 functional-198694 localhost minikube]
	I1201 21:07:12.352949  521964 provision.go:177] copyRemoteCerts
	I1201 21:07:12.353031  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 21:07:12.353075  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.373178  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:12.479006  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1201 21:07:12.479125  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 21:07:12.496931  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1201 21:07:12.497043  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1201 21:07:12.515649  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1201 21:07:12.515717  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 21:07:12.533930  521964 provision.go:87] duration metric: took 306.12888ms to configureAuth
	I1201 21:07:12.533957  521964 ubuntu.go:206] setting minikube options for container-runtime
	I1201 21:07:12.534156  521964 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:07:12.534262  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.551972  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:12.552286  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:12.552304  521964 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 21:07:12.889959  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 21:07:12.889981  521964 machine.go:97] duration metric: took 1.180373916s to provisionDockerMachine
	I1201 21:07:12.889993  521964 start.go:293] postStartSetup for "functional-198694" (driver="docker")
	I1201 21:07:12.890006  521964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 21:07:12.890086  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 21:07:12.890139  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.908762  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.018597  521964 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 21:07:13.022335  521964 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1201 21:07:13.022369  521964 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1201 21:07:13.022376  521964 command_runner.go:130] > VERSION_ID="12"
	I1201 21:07:13.022381  521964 command_runner.go:130] > VERSION="12 (bookworm)"
	I1201 21:07:13.022386  521964 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1201 21:07:13.022390  521964 command_runner.go:130] > ID=debian
	I1201 21:07:13.022396  521964 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1201 21:07:13.022401  521964 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1201 21:07:13.022407  521964 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1201 21:07:13.022493  521964 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 21:07:13.022513  521964 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 21:07:13.022526  521964 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/addons for local assets ...
	I1201 21:07:13.022584  521964 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/files for local assets ...
	I1201 21:07:13.022685  521964 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> 4860022.pem in /etc/ssl/certs
	I1201 21:07:13.022696  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> /etc/ssl/certs/4860022.pem
	I1201 21:07:13.022772  521964 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts -> hosts in /etc/test/nested/copy/486002
	I1201 21:07:13.022784  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts -> /etc/test/nested/copy/486002/hosts
	I1201 21:07:13.022828  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/486002
	I1201 21:07:13.031305  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:07:13.050359  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts --> /etc/test/nested/copy/486002/hosts (40 bytes)
	I1201 21:07:13.069098  521964 start.go:296] duration metric: took 179.090292ms for postStartSetup
	I1201 21:07:13.069200  521964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 21:07:13.069250  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:13.087931  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.188150  521964 command_runner.go:130] > 18%
	I1201 21:07:13.188720  521964 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 21:07:13.193507  521964 command_runner.go:130] > 161G
	I1201 21:07:13.195867  521964 fix.go:56] duration metric: took 1.508190835s for fixHost
	I1201 21:07:13.195933  521964 start.go:83] releasing machines lock for "functional-198694", held for 1.508273853s
	I1201 21:07:13.196019  521964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:07:13.216611  521964 ssh_runner.go:195] Run: cat /version.json
	I1201 21:07:13.216667  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:13.216936  521964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 21:07:13.216990  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:13.238266  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.249198  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.342561  521964 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1201 21:07:13.342766  521964 ssh_runner.go:195] Run: systemctl --version
	I1201 21:07:13.434302  521964 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1201 21:07:13.434432  521964 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1201 21:07:13.434476  521964 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1201 21:07:13.434562  521964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 21:07:13.473148  521964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1201 21:07:13.477954  521964 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1201 21:07:13.478007  521964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 21:07:13.478081  521964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 21:07:13.486513  521964 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 21:07:13.486536  521964 start.go:496] detecting cgroup driver to use...
	I1201 21:07:13.486599  521964 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1201 21:07:13.486671  521964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 21:07:13.502588  521964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 21:07:13.515851  521964 docker.go:218] disabling cri-docker service (if available) ...
	I1201 21:07:13.515935  521964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 21:07:13.531981  521964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 21:07:13.545612  521964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 21:07:13.660013  521964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 21:07:13.783921  521964 docker.go:234] disabling docker service ...
	I1201 21:07:13.783999  521964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 21:07:13.801145  521964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 21:07:13.814790  521964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 21:07:13.959260  521964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 21:07:14.082027  521964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 21:07:14.096899  521964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 21:07:14.110653  521964 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1201 21:07:14.112111  521964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 21:07:14.112234  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.121522  521964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 21:07:14.121606  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.132262  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.141626  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.151111  521964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 21:07:14.160033  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.169622  521964 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.178443  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.187976  521964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 21:07:14.194851  521964 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1201 21:07:14.196003  521964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 21:07:14.203835  521964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:07:14.312679  521964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 21:07:14.495171  521964 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 21:07:14.495301  521964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 21:07:14.499086  521964 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1201 21:07:14.499110  521964 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1201 21:07:14.499118  521964 command_runner.go:130] > Device: 0,72	Inode: 1746        Links: 1
	I1201 21:07:14.499125  521964 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1201 21:07:14.499150  521964 command_runner.go:130] > Access: 2025-12-01 21:07:14.424432171 +0000
	I1201 21:07:14.499176  521964 command_runner.go:130] > Modify: 2025-12-01 21:07:14.424432171 +0000
	I1201 21:07:14.499186  521964 command_runner.go:130] > Change: 2025-12-01 21:07:14.424432171 +0000
	I1201 21:07:14.499190  521964 command_runner.go:130] >  Birth: -
	I1201 21:07:14.499219  521964 start.go:564] Will wait 60s for crictl version
	I1201 21:07:14.499275  521964 ssh_runner.go:195] Run: which crictl
	I1201 21:07:14.502678  521964 command_runner.go:130] > /usr/local/bin/crictl
	I1201 21:07:14.502996  521964 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 21:07:14.524882  521964 command_runner.go:130] > Version:  0.1.0
	I1201 21:07:14.524906  521964 command_runner.go:130] > RuntimeName:  cri-o
	I1201 21:07:14.524912  521964 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1201 21:07:14.524918  521964 command_runner.go:130] > RuntimeApiVersion:  v1
	I1201 21:07:14.526840  521964 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 21:07:14.526982  521964 ssh_runner.go:195] Run: crio --version
	I1201 21:07:14.553910  521964 command_runner.go:130] > crio version 1.34.2
	I1201 21:07:14.553933  521964 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1201 21:07:14.553939  521964 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1201 21:07:14.553944  521964 command_runner.go:130] >    GitTreeState:   dirty
	I1201 21:07:14.553950  521964 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1201 21:07:14.553971  521964 command_runner.go:130] >    GoVersion:      go1.24.6
	I1201 21:07:14.553976  521964 command_runner.go:130] >    Compiler:       gc
	I1201 21:07:14.553980  521964 command_runner.go:130] >    Platform:       linux/arm64
	I1201 21:07:14.553984  521964 command_runner.go:130] >    Linkmode:       static
	I1201 21:07:14.553987  521964 command_runner.go:130] >    BuildTags:
	I1201 21:07:14.553991  521964 command_runner.go:130] >      static
	I1201 21:07:14.553994  521964 command_runner.go:130] >      netgo
	I1201 21:07:14.553998  521964 command_runner.go:130] >      osusergo
	I1201 21:07:14.554001  521964 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1201 21:07:14.554009  521964 command_runner.go:130] >      seccomp
	I1201 21:07:14.554012  521964 command_runner.go:130] >      apparmor
	I1201 21:07:14.554016  521964 command_runner.go:130] >      selinux
	I1201 21:07:14.554020  521964 command_runner.go:130] >    LDFlags:          unknown
	I1201 21:07:14.554024  521964 command_runner.go:130] >    SeccompEnabled:   true
	I1201 21:07:14.554028  521964 command_runner.go:130] >    AppArmorEnabled:  false
	I1201 21:07:14.556106  521964 ssh_runner.go:195] Run: crio --version
	I1201 21:07:14.582720  521964 command_runner.go:130] > crio version 1.34.2
	I1201 21:07:14.582784  521964 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1201 21:07:14.582817  521964 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1201 21:07:14.582840  521964 command_runner.go:130] >    GitTreeState:   dirty
	I1201 21:07:14.582863  521964 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1201 21:07:14.582897  521964 command_runner.go:130] >    GoVersion:      go1.24.6
	I1201 21:07:14.582922  521964 command_runner.go:130] >    Compiler:       gc
	I1201 21:07:14.582947  521964 command_runner.go:130] >    Platform:       linux/arm64
	I1201 21:07:14.582984  521964 command_runner.go:130] >    Linkmode:       static
	I1201 21:07:14.583008  521964 command_runner.go:130] >    BuildTags:
	I1201 21:07:14.583029  521964 command_runner.go:130] >      static
	I1201 21:07:14.583063  521964 command_runner.go:130] >      netgo
	I1201 21:07:14.583085  521964 command_runner.go:130] >      osusergo
	I1201 21:07:14.583101  521964 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1201 21:07:14.583121  521964 command_runner.go:130] >      seccomp
	I1201 21:07:14.583170  521964 command_runner.go:130] >      apparmor
	I1201 21:07:14.583196  521964 command_runner.go:130] >      selinux
	I1201 21:07:14.583217  521964 command_runner.go:130] >    LDFlags:          unknown
	I1201 21:07:14.583262  521964 command_runner.go:130] >    SeccompEnabled:   true
	I1201 21:07:14.583287  521964 command_runner.go:130] >    AppArmorEnabled:  false
	I1201 21:07:14.589911  521964 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1201 21:07:14.592808  521964 cli_runner.go:164] Run: docker network inspect functional-198694 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 21:07:14.609405  521964 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1201 21:07:14.613461  521964 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1201 21:07:14.613638  521964 kubeadm.go:884] updating cluster {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 21:07:14.613753  521964 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:07:14.613807  521964 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 21:07:14.655721  521964 command_runner.go:130] > {
	I1201 21:07:14.655745  521964 command_runner.go:130] >   "images":  [
	I1201 21:07:14.655750  521964 command_runner.go:130] >     {
	I1201 21:07:14.655758  521964 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1201 21:07:14.655763  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.655768  521964 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1201 21:07:14.655771  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655775  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.655786  521964 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1201 21:07:14.655790  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655794  521964 command_runner.go:130] >       "size":  "29035622",
	I1201 21:07:14.655798  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.655803  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.655811  521964 command_runner.go:130] >     },
	I1201 21:07:14.655815  521964 command_runner.go:130] >     {
	I1201 21:07:14.655825  521964 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1201 21:07:14.655839  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.655846  521964 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1201 21:07:14.655854  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655858  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.655866  521964 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1201 21:07:14.655871  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655876  521964 command_runner.go:130] >       "size":  "74488375",
	I1201 21:07:14.655880  521964 command_runner.go:130] >       "username":  "nonroot",
	I1201 21:07:14.655884  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.655888  521964 command_runner.go:130] >     },
	I1201 21:07:14.655891  521964 command_runner.go:130] >     {
	I1201 21:07:14.655901  521964 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1201 21:07:14.655907  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.655912  521964 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1201 21:07:14.655918  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655927  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.655946  521964 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1201 21:07:14.655955  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655960  521964 command_runner.go:130] >       "size":  "60854229",
	I1201 21:07:14.655965  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.655974  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.655978  521964 command_runner.go:130] >       },
	I1201 21:07:14.655982  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.655986  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.655989  521964 command_runner.go:130] >     },
	I1201 21:07:14.655995  521964 command_runner.go:130] >     {
	I1201 21:07:14.656002  521964 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1201 21:07:14.656010  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656015  521964 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1201 21:07:14.656018  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656024  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656033  521964 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1201 21:07:14.656040  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656044  521964 command_runner.go:130] >       "size":  "84947242",
	I1201 21:07:14.656047  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656051  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.656061  521964 command_runner.go:130] >       },
	I1201 21:07:14.656065  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656068  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656071  521964 command_runner.go:130] >     },
	I1201 21:07:14.656075  521964 command_runner.go:130] >     {
	I1201 21:07:14.656084  521964 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1201 21:07:14.656090  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656096  521964 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1201 21:07:14.656100  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656106  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656115  521964 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1201 21:07:14.656121  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656132  521964 command_runner.go:130] >       "size":  "72167568",
	I1201 21:07:14.656139  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656143  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.656146  521964 command_runner.go:130] >       },
	I1201 21:07:14.656150  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656154  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656160  521964 command_runner.go:130] >     },
	I1201 21:07:14.656163  521964 command_runner.go:130] >     {
	I1201 21:07:14.656170  521964 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1201 21:07:14.656176  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656182  521964 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1201 21:07:14.656185  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656209  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656218  521964 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1201 21:07:14.656223  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656228  521964 command_runner.go:130] >       "size":  "74105124",
	I1201 21:07:14.656231  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656236  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656241  521964 command_runner.go:130] >     },
	I1201 21:07:14.656245  521964 command_runner.go:130] >     {
	I1201 21:07:14.656251  521964 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1201 21:07:14.656257  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656262  521964 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1201 21:07:14.656268  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656272  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656279  521964 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1201 21:07:14.656285  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656289  521964 command_runner.go:130] >       "size":  "49819792",
	I1201 21:07:14.656293  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656303  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.656307  521964 command_runner.go:130] >       },
	I1201 21:07:14.656311  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656316  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656323  521964 command_runner.go:130] >     },
	I1201 21:07:14.656330  521964 command_runner.go:130] >     {
	I1201 21:07:14.656337  521964 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1201 21:07:14.656341  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656345  521964 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1201 21:07:14.656350  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656355  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656365  521964 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1201 21:07:14.656368  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656372  521964 command_runner.go:130] >       "size":  "517328",
	I1201 21:07:14.656378  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656383  521964 command_runner.go:130] >         "value":  "65535"
	I1201 21:07:14.656388  521964 command_runner.go:130] >       },
	I1201 21:07:14.656392  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656395  521964 command_runner.go:130] >       "pinned":  true
	I1201 21:07:14.656399  521964 command_runner.go:130] >     }
	I1201 21:07:14.656404  521964 command_runner.go:130] >   ]
	I1201 21:07:14.656408  521964 command_runner.go:130] > }
	I1201 21:07:14.656549  521964 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 21:07:14.656561  521964 cache_images.go:86] Images are preloaded, skipping loading
	I1201 21:07:14.656568  521964 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1201 21:07:14.656668  521964 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-198694 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 21:07:14.656752  521964 ssh_runner.go:195] Run: crio config
	I1201 21:07:14.734869  521964 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1201 21:07:14.734915  521964 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1201 21:07:14.734928  521964 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1201 21:07:14.734945  521964 command_runner.go:130] > #
	I1201 21:07:14.734957  521964 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1201 21:07:14.734978  521964 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1201 21:07:14.734989  521964 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1201 21:07:14.735001  521964 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1201 21:07:14.735009  521964 command_runner.go:130] > # reload'.
	I1201 21:07:14.735017  521964 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1201 21:07:14.735028  521964 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1201 21:07:14.735038  521964 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1201 21:07:14.735051  521964 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1201 21:07:14.735059  521964 command_runner.go:130] > [crio]
	I1201 21:07:14.735069  521964 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1201 21:07:14.735078  521964 command_runner.go:130] > # containers images, in this directory.
	I1201 21:07:14.735108  521964 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1201 21:07:14.735125  521964 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1201 21:07:14.735149  521964 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1201 21:07:14.735158  521964 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1201 21:07:14.735167  521964 command_runner.go:130] > # imagestore = ""
	I1201 21:07:14.735180  521964 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1201 21:07:14.735200  521964 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1201 21:07:14.735401  521964 command_runner.go:130] > # storage_driver = "overlay"
	I1201 21:07:14.735416  521964 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1201 21:07:14.735422  521964 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1201 21:07:14.735427  521964 command_runner.go:130] > # storage_option = [
	I1201 21:07:14.735430  521964 command_runner.go:130] > # ]
	I1201 21:07:14.735440  521964 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1201 21:07:14.735447  521964 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1201 21:07:14.735451  521964 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1201 21:07:14.735457  521964 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1201 21:07:14.735464  521964 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1201 21:07:14.735475  521964 command_runner.go:130] > # always happen on a node reboot
	I1201 21:07:14.735773  521964 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1201 21:07:14.735799  521964 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1201 21:07:14.735807  521964 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1201 21:07:14.735813  521964 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1201 21:07:14.735817  521964 command_runner.go:130] > # version_file_persist = ""
	I1201 21:07:14.735825  521964 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1201 21:07:14.735839  521964 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1201 21:07:14.735844  521964 command_runner.go:130] > # internal_wipe = true
	I1201 21:07:14.735852  521964 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1201 21:07:14.735858  521964 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1201 21:07:14.735861  521964 command_runner.go:130] > # internal_repair = true
	I1201 21:07:14.735867  521964 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1201 21:07:14.735873  521964 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1201 21:07:14.735882  521964 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1201 21:07:14.735891  521964 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1201 21:07:14.735901  521964 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1201 21:07:14.735904  521964 command_runner.go:130] > [crio.api]
	I1201 21:07:14.735909  521964 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1201 21:07:14.735916  521964 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1201 21:07:14.735921  521964 command_runner.go:130] > # IP address on which the stream server will listen.
	I1201 21:07:14.735925  521964 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1201 21:07:14.735932  521964 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1201 21:07:14.735946  521964 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1201 21:07:14.735950  521964 command_runner.go:130] > # stream_port = "0"
	I1201 21:07:14.735958  521964 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1201 21:07:14.735962  521964 command_runner.go:130] > # stream_enable_tls = false
	I1201 21:07:14.735968  521964 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1201 21:07:14.735972  521964 command_runner.go:130] > # stream_idle_timeout = ""
	I1201 21:07:14.735981  521964 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1201 21:07:14.735991  521964 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1201 21:07:14.735995  521964 command_runner.go:130] > # stream_tls_cert = ""
	I1201 21:07:14.736001  521964 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1201 21:07:14.736006  521964 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1201 21:07:14.736013  521964 command_runner.go:130] > # stream_tls_key = ""
	I1201 21:07:14.736023  521964 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1201 21:07:14.736030  521964 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1201 21:07:14.736037  521964 command_runner.go:130] > # automatically pick up the changes.
	I1201 21:07:14.736045  521964 command_runner.go:130] > # stream_tls_ca = ""
	I1201 21:07:14.736072  521964 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1201 21:07:14.736077  521964 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1201 21:07:14.736085  521964 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1201 21:07:14.736092  521964 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1201 21:07:14.736099  521964 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1201 21:07:14.736105  521964 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1201 21:07:14.736108  521964 command_runner.go:130] > [crio.runtime]
	I1201 21:07:14.736114  521964 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1201 21:07:14.736119  521964 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1201 21:07:14.736127  521964 command_runner.go:130] > # "nofile=1024:2048"
	I1201 21:07:14.736134  521964 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1201 21:07:14.736138  521964 command_runner.go:130] > # default_ulimits = [
	I1201 21:07:14.736141  521964 command_runner.go:130] > # ]
	I1201 21:07:14.736146  521964 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1201 21:07:14.736150  521964 command_runner.go:130] > # no_pivot = false
	I1201 21:07:14.736162  521964 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1201 21:07:14.736168  521964 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1201 21:07:14.736196  521964 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1201 21:07:14.736202  521964 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1201 21:07:14.736210  521964 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1201 21:07:14.736220  521964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1201 21:07:14.736223  521964 command_runner.go:130] > # conmon = ""
	I1201 21:07:14.736228  521964 command_runner.go:130] > # Cgroup setting for conmon
	I1201 21:07:14.736235  521964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1201 21:07:14.736239  521964 command_runner.go:130] > conmon_cgroup = "pod"
	I1201 21:07:14.736257  521964 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1201 21:07:14.736262  521964 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1201 21:07:14.736269  521964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1201 21:07:14.736273  521964 command_runner.go:130] > # conmon_env = [
	I1201 21:07:14.736276  521964 command_runner.go:130] > # ]
	I1201 21:07:14.736281  521964 command_runner.go:130] > # Additional environment variables to set for all the
	I1201 21:07:14.736286  521964 command_runner.go:130] > # containers. These are overridden if set in the
	I1201 21:07:14.736295  521964 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1201 21:07:14.736302  521964 command_runner.go:130] > # default_env = [
	I1201 21:07:14.736308  521964 command_runner.go:130] > # ]
	I1201 21:07:14.736314  521964 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1201 21:07:14.736322  521964 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1201 21:07:14.736328  521964 command_runner.go:130] > # selinux = false
	I1201 21:07:14.736356  521964 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1201 21:07:14.736370  521964 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1201 21:07:14.736375  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736379  521964 command_runner.go:130] > # seccomp_profile = ""
	I1201 21:07:14.736388  521964 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1201 21:07:14.736393  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736397  521964 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1201 21:07:14.736406  521964 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1201 21:07:14.736413  521964 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1201 21:07:14.736419  521964 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1201 21:07:14.736425  521964 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1201 21:07:14.736431  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736439  521964 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1201 21:07:14.736445  521964 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1201 21:07:14.736449  521964 command_runner.go:130] > # the cgroup blockio controller.
	I1201 21:07:14.736452  521964 command_runner.go:130] > # blockio_config_file = ""
	I1201 21:07:14.736459  521964 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1201 21:07:14.736463  521964 command_runner.go:130] > # blockio parameters.
	I1201 21:07:14.736467  521964 command_runner.go:130] > # blockio_reload = false
	I1201 21:07:14.736474  521964 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1201 21:07:14.736477  521964 command_runner.go:130] > # irqbalance daemon.
	I1201 21:07:14.736483  521964 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1201 21:07:14.736489  521964 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1201 21:07:14.736496  521964 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1201 21:07:14.736508  521964 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1201 21:07:14.736514  521964 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1201 21:07:14.736523  521964 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1201 21:07:14.736532  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736536  521964 command_runner.go:130] > # rdt_config_file = ""
	I1201 21:07:14.736545  521964 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1201 21:07:14.736550  521964 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1201 21:07:14.736555  521964 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1201 21:07:14.736560  521964 command_runner.go:130] > # separate_pull_cgroup = ""
	I1201 21:07:14.736569  521964 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1201 21:07:14.736576  521964 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1201 21:07:14.736580  521964 command_runner.go:130] > # will be added.
	I1201 21:07:14.736585  521964 command_runner.go:130] > # default_capabilities = [
	I1201 21:07:14.737078  521964 command_runner.go:130] > # 	"CHOWN",
	I1201 21:07:14.737092  521964 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1201 21:07:14.737096  521964 command_runner.go:130] > # 	"FSETID",
	I1201 21:07:14.737099  521964 command_runner.go:130] > # 	"FOWNER",
	I1201 21:07:14.737102  521964 command_runner.go:130] > # 	"SETGID",
	I1201 21:07:14.737106  521964 command_runner.go:130] > # 	"SETUID",
	I1201 21:07:14.737130  521964 command_runner.go:130] > # 	"SETPCAP",
	I1201 21:07:14.737134  521964 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1201 21:07:14.737138  521964 command_runner.go:130] > # 	"KILL",
	I1201 21:07:14.737144  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737153  521964 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1201 21:07:14.737160  521964 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1201 21:07:14.737165  521964 command_runner.go:130] > # add_inheritable_capabilities = false
	I1201 21:07:14.737171  521964 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1201 21:07:14.737189  521964 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1201 21:07:14.737193  521964 command_runner.go:130] > default_sysctls = [
	I1201 21:07:14.737198  521964 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1201 21:07:14.737200  521964 command_runner.go:130] > ]
	I1201 21:07:14.737205  521964 command_runner.go:130] > # List of devices on the host that a
	I1201 21:07:14.737212  521964 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1201 21:07:14.737215  521964 command_runner.go:130] > # allowed_devices = [
	I1201 21:07:14.737219  521964 command_runner.go:130] > # 	"/dev/fuse",
	I1201 21:07:14.737222  521964 command_runner.go:130] > # 	"/dev/net/tun",
	I1201 21:07:14.737225  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737230  521964 command_runner.go:130] > # List of additional devices, specified as
	I1201 21:07:14.737237  521964 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1201 21:07:14.737243  521964 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1201 21:07:14.737249  521964 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1201 21:07:14.737253  521964 command_runner.go:130] > # additional_devices = [
	I1201 21:07:14.737257  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737266  521964 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1201 21:07:14.737271  521964 command_runner.go:130] > # cdi_spec_dirs = [
	I1201 21:07:14.737274  521964 command_runner.go:130] > # 	"/etc/cdi",
	I1201 21:07:14.737277  521964 command_runner.go:130] > # 	"/var/run/cdi",
	I1201 21:07:14.737280  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737286  521964 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1201 21:07:14.737293  521964 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1201 21:07:14.737297  521964 command_runner.go:130] > # Defaults to false.
	I1201 21:07:14.737311  521964 command_runner.go:130] > # device_ownership_from_security_context = false
	I1201 21:07:14.737318  521964 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1201 21:07:14.737324  521964 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1201 21:07:14.737327  521964 command_runner.go:130] > # hooks_dir = [
	I1201 21:07:14.737335  521964 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1201 21:07:14.737338  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737344  521964 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1201 21:07:14.737352  521964 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1201 21:07:14.737357  521964 command_runner.go:130] > # its default mounts from the following two files:
	I1201 21:07:14.737360  521964 command_runner.go:130] > #
	I1201 21:07:14.737366  521964 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1201 21:07:14.737372  521964 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1201 21:07:14.737378  521964 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1201 21:07:14.737380  521964 command_runner.go:130] > #
	I1201 21:07:14.737386  521964 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1201 21:07:14.737393  521964 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1201 21:07:14.737399  521964 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1201 21:07:14.737407  521964 command_runner.go:130] > #      only add mounts it finds in this file.
	I1201 21:07:14.737410  521964 command_runner.go:130] > #
	I1201 21:07:14.737414  521964 command_runner.go:130] > # default_mounts_file = ""
	I1201 21:07:14.737422  521964 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1201 21:07:14.737429  521964 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1201 21:07:14.737433  521964 command_runner.go:130] > # pids_limit = -1
	I1201 21:07:14.737440  521964 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1201 21:07:14.737446  521964 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1201 21:07:14.737452  521964 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1201 21:07:14.737460  521964 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1201 21:07:14.737464  521964 command_runner.go:130] > # log_size_max = -1
	I1201 21:07:14.737472  521964 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1201 21:07:14.737476  521964 command_runner.go:130] > # log_to_journald = false
	I1201 21:07:14.737487  521964 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1201 21:07:14.737492  521964 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1201 21:07:14.737497  521964 command_runner.go:130] > # Path to directory for container attach sockets.
	I1201 21:07:14.737502  521964 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1201 21:07:14.737511  521964 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1201 21:07:14.737516  521964 command_runner.go:130] > # bind_mount_prefix = ""
	I1201 21:07:14.737521  521964 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1201 21:07:14.737528  521964 command_runner.go:130] > # read_only = false
	I1201 21:07:14.737534  521964 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1201 21:07:14.737541  521964 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1201 21:07:14.737545  521964 command_runner.go:130] > # live configuration reload.
	I1201 21:07:14.737549  521964 command_runner.go:130] > # log_level = "info"
	I1201 21:07:14.737557  521964 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1201 21:07:14.737563  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.737567  521964 command_runner.go:130] > # log_filter = ""
	I1201 21:07:14.737573  521964 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1201 21:07:14.737583  521964 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1201 21:07:14.737588  521964 command_runner.go:130] > # separated by comma.
	I1201 21:07:14.737596  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737599  521964 command_runner.go:130] > # uid_mappings = ""
	I1201 21:07:14.737606  521964 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1201 21:07:14.737612  521964 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1201 21:07:14.737616  521964 command_runner.go:130] > # separated by comma.
	I1201 21:07:14.737624  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737627  521964 command_runner.go:130] > # gid_mappings = ""
	I1201 21:07:14.737634  521964 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1201 21:07:14.737640  521964 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1201 21:07:14.737646  521964 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1201 21:07:14.737660  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737665  521964 command_runner.go:130] > # minimum_mappable_uid = -1
	I1201 21:07:14.737674  521964 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1201 21:07:14.737681  521964 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1201 21:07:14.737686  521964 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1201 21:07:14.737694  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737937  521964 command_runner.go:130] > # minimum_mappable_gid = -1
	I1201 21:07:14.737957  521964 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1201 21:07:14.737967  521964 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1201 21:07:14.737974  521964 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1201 21:07:14.737980  521964 command_runner.go:130] > # ctr_stop_timeout = 30
	I1201 21:07:14.737998  521964 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1201 21:07:14.738018  521964 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1201 21:07:14.738028  521964 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1201 21:07:14.738033  521964 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1201 21:07:14.738042  521964 command_runner.go:130] > # drop_infra_ctr = true
	I1201 21:07:14.738048  521964 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1201 21:07:14.738058  521964 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1201 21:07:14.738073  521964 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1201 21:07:14.738082  521964 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1201 21:07:14.738090  521964 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1201 21:07:14.738099  521964 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1201 21:07:14.738106  521964 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1201 21:07:14.738116  521964 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1201 21:07:14.738120  521964 command_runner.go:130] > # shared_cpuset = ""
	I1201 21:07:14.738130  521964 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1201 21:07:14.738139  521964 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1201 21:07:14.738154  521964 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1201 21:07:14.738162  521964 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1201 21:07:14.738167  521964 command_runner.go:130] > # pinns_path = ""
	I1201 21:07:14.738173  521964 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1201 21:07:14.738182  521964 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1201 21:07:14.738191  521964 command_runner.go:130] > # enable_criu_support = true
	I1201 21:07:14.738197  521964 command_runner.go:130] > # Enable/disable the generation of the container and
	I1201 21:07:14.738206  521964 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1201 21:07:14.738221  521964 command_runner.go:130] > # enable_pod_events = false
	I1201 21:07:14.738232  521964 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1201 21:07:14.738238  521964 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1201 21:07:14.738242  521964 command_runner.go:130] > # default_runtime = "crun"
	I1201 21:07:14.738251  521964 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1201 21:07:14.738259  521964 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as a directory).
	I1201 21:07:14.738269  521964 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1201 21:07:14.738278  521964 command_runner.go:130] > # creation as a file is not desired either.
	I1201 21:07:14.738287  521964 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1201 21:07:14.738304  521964 command_runner.go:130] > # the hostname is being managed dynamically.
	I1201 21:07:14.738322  521964 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1201 21:07:14.738329  521964 command_runner.go:130] > # ]
	I1201 21:07:14.738336  521964 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1201 21:07:14.738347  521964 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1201 21:07:14.738353  521964 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1201 21:07:14.738358  521964 command_runner.go:130] > # Each entry in the table should follow the format:
	I1201 21:07:14.738365  521964 command_runner.go:130] > #
	I1201 21:07:14.738381  521964 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1201 21:07:14.738387  521964 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1201 21:07:14.738394  521964 command_runner.go:130] > # runtime_type = "oci"
	I1201 21:07:14.738400  521964 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1201 21:07:14.738408  521964 command_runner.go:130] > # inherit_default_runtime = false
	I1201 21:07:14.738414  521964 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1201 21:07:14.738421  521964 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1201 21:07:14.738426  521964 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1201 21:07:14.738434  521964 command_runner.go:130] > # monitor_env = []
	I1201 21:07:14.738439  521964 command_runner.go:130] > # privileged_without_host_devices = false
	I1201 21:07:14.738449  521964 command_runner.go:130] > # allowed_annotations = []
	I1201 21:07:14.738459  521964 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1201 21:07:14.738463  521964 command_runner.go:130] > # no_sync_log = false
	I1201 21:07:14.738469  521964 command_runner.go:130] > # default_annotations = {}
	I1201 21:07:14.738473  521964 command_runner.go:130] > # stream_websockets = false
	I1201 21:07:14.738481  521964 command_runner.go:130] > # seccomp_profile = ""
	I1201 21:07:14.738515  521964 command_runner.go:130] > # Where:
	I1201 21:07:14.738533  521964 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1201 21:07:14.738539  521964 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1201 21:07:14.738546  521964 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1201 21:07:14.738556  521964 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1201 21:07:14.738560  521964 command_runner.go:130] > #   in $PATH.
	I1201 21:07:14.738572  521964 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1201 21:07:14.738581  521964 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1201 21:07:14.738587  521964 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1201 21:07:14.738601  521964 command_runner.go:130] > #   state.
	I1201 21:07:14.738612  521964 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1201 21:07:14.738623  521964 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1201 21:07:14.738629  521964 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1201 21:07:14.738641  521964 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1201 21:07:14.738648  521964 command_runner.go:130] > #   the values from the default runtime on load time.
	I1201 21:07:14.738658  521964 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1201 21:07:14.738675  521964 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1201 21:07:14.738686  521964 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1201 21:07:14.738697  521964 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1201 21:07:14.738706  521964 command_runner.go:130] > #   The currently recognized values are:
	I1201 21:07:14.738713  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1201 21:07:14.738722  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1201 21:07:14.738731  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1201 21:07:14.738737  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1201 21:07:14.738751  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1201 21:07:14.738762  521964 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1201 21:07:14.738774  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1201 21:07:14.738785  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1201 21:07:14.738795  521964 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1201 21:07:14.738801  521964 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1201 21:07:14.738814  521964 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1201 21:07:14.738830  521964 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1201 21:07:14.738841  521964 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1201 21:07:14.738847  521964 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1201 21:07:14.738857  521964 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1201 21:07:14.738871  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1201 21:07:14.738878  521964 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1201 21:07:14.738885  521964 command_runner.go:130] > #   deprecated option "conmon".
	I1201 21:07:14.738904  521964 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1201 21:07:14.738913  521964 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1201 21:07:14.738921  521964 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1201 21:07:14.738930  521964 command_runner.go:130] > #   should be moved to the container's cgroup
	I1201 21:07:14.738937  521964 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1201 21:07:14.738949  521964 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1201 21:07:14.738961  521964 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1201 21:07:14.738974  521964 command_runner.go:130] > #   conmon-rs by using:
	I1201 21:07:14.738982  521964 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1201 21:07:14.738996  521964 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1201 21:07:14.739008  521964 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1201 21:07:14.739024  521964 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1201 21:07:14.739033  521964 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1201 21:07:14.739040  521964 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1201 21:07:14.739057  521964 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1201 21:07:14.739067  521964 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1201 21:07:14.739077  521964 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1201 21:07:14.739089  521964 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1201 21:07:14.739097  521964 command_runner.go:130] > #   when a machine crash happens.
	I1201 21:07:14.739105  521964 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1201 21:07:14.739117  521964 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1201 21:07:14.739152  521964 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1201 21:07:14.739158  521964 command_runner.go:130] > #   seccomp profile for the runtime.
	I1201 21:07:14.739165  521964 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1201 21:07:14.739172  521964 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1201 21:07:14.739175  521964 command_runner.go:130] > #
	I1201 21:07:14.739179  521964 command_runner.go:130] > # Using the seccomp notifier feature:
	I1201 21:07:14.739182  521964 command_runner.go:130] > #
	I1201 21:07:14.739188  521964 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1201 21:07:14.739195  521964 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1201 21:07:14.739204  521964 command_runner.go:130] > #
	I1201 21:07:14.739211  521964 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1201 21:07:14.739217  521964 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1201 21:07:14.739220  521964 command_runner.go:130] > #
	I1201 21:07:14.739225  521964 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1201 21:07:14.739228  521964 command_runner.go:130] > # feature.
	I1201 21:07:14.739231  521964 command_runner.go:130] > #
	I1201 21:07:14.739237  521964 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1201 21:07:14.739247  521964 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1201 21:07:14.739257  521964 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1201 21:07:14.739263  521964 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1201 21:07:14.739270  521964 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1201 21:07:14.739281  521964 command_runner.go:130] > #
	I1201 21:07:14.739288  521964 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1201 21:07:14.739293  521964 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1201 21:07:14.739296  521964 command_runner.go:130] > #
	I1201 21:07:14.739302  521964 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1201 21:07:14.739308  521964 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1201 21:07:14.739310  521964 command_runner.go:130] > #
	I1201 21:07:14.739316  521964 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1201 21:07:14.739322  521964 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1201 21:07:14.739325  521964 command_runner.go:130] > # limitation.
	I1201 21:07:14.739329  521964 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1201 21:07:14.739334  521964 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1201 21:07:14.739337  521964 command_runner.go:130] > runtime_type = ""
	I1201 21:07:14.739341  521964 command_runner.go:130] > runtime_root = "/run/crun"
	I1201 21:07:14.739345  521964 command_runner.go:130] > inherit_default_runtime = false
	I1201 21:07:14.739356  521964 command_runner.go:130] > runtime_config_path = ""
	I1201 21:07:14.739360  521964 command_runner.go:130] > container_min_memory = ""
	I1201 21:07:14.739365  521964 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1201 21:07:14.739369  521964 command_runner.go:130] > monitor_cgroup = "pod"
	I1201 21:07:14.739373  521964 command_runner.go:130] > monitor_exec_cgroup = ""
	I1201 21:07:14.739380  521964 command_runner.go:130] > allowed_annotations = [
	I1201 21:07:14.739384  521964 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1201 21:07:14.739391  521964 command_runner.go:130] > ]
	I1201 21:07:14.739396  521964 command_runner.go:130] > privileged_without_host_devices = false
	I1201 21:07:14.739400  521964 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1201 21:07:14.739409  521964 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1201 21:07:14.739413  521964 command_runner.go:130] > runtime_type = ""
	I1201 21:07:14.739420  521964 command_runner.go:130] > runtime_root = "/run/runc"
	I1201 21:07:14.739434  521964 command_runner.go:130] > inherit_default_runtime = false
	I1201 21:07:14.739442  521964 command_runner.go:130] > runtime_config_path = ""
	I1201 21:07:14.739450  521964 command_runner.go:130] > container_min_memory = ""
	I1201 21:07:14.739455  521964 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1201 21:07:14.739459  521964 command_runner.go:130] > monitor_cgroup = "pod"
	I1201 21:07:14.739465  521964 command_runner.go:130] > monitor_exec_cgroup = ""
	I1201 21:07:14.739470  521964 command_runner.go:130] > privileged_without_host_devices = false
	I1201 21:07:14.739481  521964 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1201 21:07:14.739490  521964 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1201 21:07:14.739507  521964 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1201 21:07:14.739519  521964 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1201 21:07:14.739534  521964 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1201 21:07:14.739546  521964 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1201 21:07:14.739559  521964 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1201 21:07:14.739569  521964 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1201 21:07:14.739589  521964 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1201 21:07:14.739601  521964 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1201 21:07:14.739616  521964 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1201 21:07:14.739627  521964 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1201 21:07:14.739635  521964 command_runner.go:130] > # Example:
	I1201 21:07:14.739639  521964 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1201 21:07:14.739652  521964 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1201 21:07:14.739663  521964 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1201 21:07:14.739669  521964 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1201 21:07:14.739672  521964 command_runner.go:130] > # cpuset = "0-1"
	I1201 21:07:14.739681  521964 command_runner.go:130] > # cpushares = "5"
	I1201 21:07:14.739685  521964 command_runner.go:130] > # cpuquota = "1000"
	I1201 21:07:14.739694  521964 command_runner.go:130] > # cpuperiod = "100000"
	I1201 21:07:14.739698  521964 command_runner.go:130] > # cpulimit = "35"
	I1201 21:07:14.739705  521964 command_runner.go:130] > # Where:
	I1201 21:07:14.739709  521964 command_runner.go:130] > # The workload name is workload-type.
	I1201 21:07:14.739716  521964 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1201 21:07:14.739728  521964 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1201 21:07:14.739739  521964 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1201 21:07:14.739752  521964 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1201 21:07:14.739762  521964 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1201 21:07:14.739768  521964 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1201 21:07:14.739778  521964 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1201 21:07:14.739786  521964 command_runner.go:130] > # Default value is set to true
	I1201 21:07:14.739791  521964 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1201 21:07:14.739803  521964 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1201 21:07:14.739813  521964 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1201 21:07:14.739818  521964 command_runner.go:130] > # Default value is set to 'false'
	I1201 21:07:14.739822  521964 command_runner.go:130] > # disable_hostport_mapping = false
	I1201 21:07:14.739830  521964 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1201 21:07:14.739839  521964 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1201 21:07:14.739846  521964 command_runner.go:130] > # timezone = ""
	I1201 21:07:14.739853  521964 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1201 21:07:14.739859  521964 command_runner.go:130] > #
	I1201 21:07:14.739866  521964 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1201 21:07:14.739884  521964 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1201 21:07:14.739892  521964 command_runner.go:130] > [crio.image]
	I1201 21:07:14.739898  521964 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1201 21:07:14.739903  521964 command_runner.go:130] > # default_transport = "docker://"
	I1201 21:07:14.739913  521964 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1201 21:07:14.739919  521964 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1201 21:07:14.739926  521964 command_runner.go:130] > # global_auth_file = ""
	I1201 21:07:14.739931  521964 command_runner.go:130] > # The image used to instantiate infra containers.
	I1201 21:07:14.739940  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.739952  521964 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1201 21:07:14.739964  521964 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1201 21:07:14.739973  521964 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1201 21:07:14.739979  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.739986  521964 command_runner.go:130] > # pause_image_auth_file = ""
	I1201 21:07:14.739993  521964 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1201 21:07:14.740002  521964 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1201 21:07:14.740009  521964 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1201 21:07:14.740029  521964 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1201 21:07:14.740037  521964 command_runner.go:130] > # pause_command = "/pause"
	I1201 21:07:14.740044  521964 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1201 21:07:14.740053  521964 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1201 21:07:14.740060  521964 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1201 21:07:14.740070  521964 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1201 21:07:14.740076  521964 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1201 21:07:14.740086  521964 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1201 21:07:14.740091  521964 command_runner.go:130] > # pinned_images = [
	I1201 21:07:14.740093  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740110  521964 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1201 21:07:14.740121  521964 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1201 21:07:14.740133  521964 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1201 21:07:14.740143  521964 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1201 21:07:14.740153  521964 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1201 21:07:14.740158  521964 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1201 21:07:14.740167  521964 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1201 21:07:14.740181  521964 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1201 21:07:14.740204  521964 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1201 21:07:14.740215  521964 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1201 21:07:14.740226  521964 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1201 21:07:14.740236  521964 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1201 21:07:14.740243  521964 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1201 21:07:14.740259  521964 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1201 21:07:14.740263  521964 command_runner.go:130] > # changing them here.
	I1201 21:07:14.740273  521964 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1201 21:07:14.740278  521964 command_runner.go:130] > # insecure_registries = [
	I1201 21:07:14.740285  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740293  521964 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1201 21:07:14.740302  521964 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1201 21:07:14.740306  521964 command_runner.go:130] > # image_volumes = "mkdir"
	I1201 21:07:14.740316  521964 command_runner.go:130] > # Temporary directory to use for storing big files
	I1201 21:07:14.740321  521964 command_runner.go:130] > # big_files_temporary_dir = ""
	I1201 21:07:14.740340  521964 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1201 21:07:14.740349  521964 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1201 21:07:14.740358  521964 command_runner.go:130] > # auto_reload_registries = false
	I1201 21:07:14.740364  521964 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1201 21:07:14.740376  521964 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1201 21:07:14.740387  521964 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1201 21:07:14.740391  521964 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1201 21:07:14.740399  521964 command_runner.go:130] > # The mode of short name resolution.
	I1201 21:07:14.740415  521964 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1201 21:07:14.740423  521964 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1201 21:07:14.740428  521964 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1201 21:07:14.740436  521964 command_runner.go:130] > # short_name_mode = "enforcing"
	I1201 21:07:14.740443  521964 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1201 21:07:14.740453  521964 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1201 21:07:14.740462  521964 command_runner.go:130] > # oci_artifact_mount_support = true
	I1201 21:07:14.740469  521964 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1201 21:07:14.740484  521964 command_runner.go:130] > # CNI plugins.
	I1201 21:07:14.740492  521964 command_runner.go:130] > [crio.network]
	I1201 21:07:14.740498  521964 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1201 21:07:14.740504  521964 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1201 21:07:14.740512  521964 command_runner.go:130] > # cni_default_network = ""
	I1201 21:07:14.740519  521964 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1201 21:07:14.740530  521964 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1201 21:07:14.740540  521964 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1201 21:07:14.740549  521964 command_runner.go:130] > # plugin_dirs = [
	I1201 21:07:14.740562  521964 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1201 21:07:14.740566  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740576  521964 command_runner.go:130] > # List of included pod metrics.
	I1201 21:07:14.740580  521964 command_runner.go:130] > # included_pod_metrics = [
	I1201 21:07:14.740583  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740588  521964 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1201 21:07:14.740596  521964 command_runner.go:130] > [crio.metrics]
	I1201 21:07:14.740602  521964 command_runner.go:130] > # Globally enable or disable metrics support.
	I1201 21:07:14.740614  521964 command_runner.go:130] > # enable_metrics = false
	I1201 21:07:14.740622  521964 command_runner.go:130] > # Specify enabled metrics collectors.
	I1201 21:07:14.740637  521964 command_runner.go:130] > # By default, all metrics are enabled.
	I1201 21:07:14.740644  521964 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1201 21:07:14.740655  521964 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1201 21:07:14.740662  521964 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1201 21:07:14.740666  521964 command_runner.go:130] > # metrics_collectors = [
	I1201 21:07:14.740674  521964 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1201 21:07:14.740680  521964 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1201 21:07:14.740688  521964 command_runner.go:130] > # 	"containers_oom_total",
	I1201 21:07:14.740692  521964 command_runner.go:130] > # 	"processes_defunct",
	I1201 21:07:14.740706  521964 command_runner.go:130] > # 	"operations_total",
	I1201 21:07:14.740714  521964 command_runner.go:130] > # 	"operations_latency_seconds",
	I1201 21:07:14.740719  521964 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1201 21:07:14.740727  521964 command_runner.go:130] > # 	"operations_errors_total",
	I1201 21:07:14.740731  521964 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1201 21:07:14.740736  521964 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1201 21:07:14.740740  521964 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1201 21:07:14.740748  521964 command_runner.go:130] > # 	"image_pulls_success_total",
	I1201 21:07:14.740753  521964 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1201 21:07:14.740761  521964 command_runner.go:130] > # 	"containers_oom_count_total",
	I1201 21:07:14.740766  521964 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1201 21:07:14.740780  521964 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1201 21:07:14.740789  521964 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1201 21:07:14.740792  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740803  521964 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1201 21:07:14.740807  521964 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1201 21:07:14.740812  521964 command_runner.go:130] > # The port on which the metrics server will listen.
	I1201 21:07:14.740816  521964 command_runner.go:130] > # metrics_port = 9090
	I1201 21:07:14.740825  521964 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1201 21:07:14.740829  521964 command_runner.go:130] > # metrics_socket = ""
	I1201 21:07:14.740839  521964 command_runner.go:130] > # The certificate for the secure metrics server.
	I1201 21:07:14.740846  521964 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1201 21:07:14.740867  521964 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1201 21:07:14.740879  521964 command_runner.go:130] > # certificate on any modification event.
	I1201 21:07:14.740883  521964 command_runner.go:130] > # metrics_cert = ""
	I1201 21:07:14.740888  521964 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1201 21:07:14.740897  521964 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1201 21:07:14.740901  521964 command_runner.go:130] > # metrics_key = ""
	I1201 21:07:14.740912  521964 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1201 21:07:14.740916  521964 command_runner.go:130] > [crio.tracing]
	I1201 21:07:14.740933  521964 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1201 21:07:14.740941  521964 command_runner.go:130] > # enable_tracing = false
	I1201 21:07:14.740946  521964 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1201 21:07:14.740959  521964 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1201 21:07:14.740966  521964 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1201 21:07:14.740970  521964 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1201 21:07:14.740975  521964 command_runner.go:130] > # CRI-O NRI configuration.
	I1201 21:07:14.740982  521964 command_runner.go:130] > [crio.nri]
	I1201 21:07:14.740987  521964 command_runner.go:130] > # Globally enable or disable NRI.
	I1201 21:07:14.740993  521964 command_runner.go:130] > # enable_nri = true
	I1201 21:07:14.741004  521964 command_runner.go:130] > # NRI socket to listen on.
	I1201 21:07:14.741013  521964 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1201 21:07:14.741018  521964 command_runner.go:130] > # NRI plugin directory to use.
	I1201 21:07:14.741026  521964 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1201 21:07:14.741031  521964 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1201 21:07:14.741039  521964 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1201 21:07:14.741046  521964 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1201 21:07:14.741111  521964 command_runner.go:130] > # nri_disable_connections = false
	I1201 21:07:14.741122  521964 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1201 21:07:14.741131  521964 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1201 21:07:14.741137  521964 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1201 21:07:14.741142  521964 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1201 21:07:14.741156  521964 command_runner.go:130] > # NRI default validator configuration.
	I1201 21:07:14.741167  521964 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1201 21:07:14.741178  521964 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1201 21:07:14.741190  521964 command_runner.go:130] > # can be restricted/rejected:
	I1201 21:07:14.741198  521964 command_runner.go:130] > # - OCI hook injection
	I1201 21:07:14.741206  521964 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1201 21:07:14.741214  521964 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1201 21:07:14.741218  521964 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1201 21:07:14.741229  521964 command_runner.go:130] > # - adjustment of linux namespaces
	I1201 21:07:14.741241  521964 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1201 21:07:14.741252  521964 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1201 21:07:14.741262  521964 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1201 21:07:14.741268  521964 command_runner.go:130] > #
	I1201 21:07:14.741276  521964 command_runner.go:130] > # [crio.nri.default_validator]
	I1201 21:07:14.741281  521964 command_runner.go:130] > # nri_enable_default_validator = false
	I1201 21:07:14.741290  521964 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1201 21:07:14.741295  521964 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1201 21:07:14.741308  521964 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1201 21:07:14.741318  521964 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1201 21:07:14.741323  521964 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1201 21:07:14.741331  521964 command_runner.go:130] > # nri_validator_required_plugins = [
	I1201 21:07:14.741334  521964 command_runner.go:130] > # ]
	I1201 21:07:14.741344  521964 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1201 21:07:14.741350  521964 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1201 21:07:14.741357  521964 command_runner.go:130] > [crio.stats]
	I1201 21:07:14.741364  521964 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1201 21:07:14.741379  521964 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1201 21:07:14.741384  521964 command_runner.go:130] > # stats_collection_period = 0
	I1201 21:07:14.741390  521964 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1201 21:07:14.741400  521964 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1201 21:07:14.741409  521964 command_runner.go:130] > # collection_period = 0
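	The commented defaults dumped above can be overridden with a drop-in file under /etc/crio/crio.conf.d/, which is how the reload messages that follow pick up 02-crio.conf and 10-crio.conf. A minimal illustrative fragment (file name and values are examples, not this test's actual configuration):

	```toml
	# Hypothetical drop-in: /etc/crio/crio.conf.d/99-example.conf
	[crio.metrics]
	# Expose the Prometheus metrics endpoint (disabled by default above).
	enable_metrics = true
	metrics_port = 9090

	[crio.stats]
	# Collect pod/container stats every 10 seconds instead of on-demand.
	stats_collection_period = 10
	```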
	I1201 21:07:14.743695  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.701489723Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1201 21:07:14.743741  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.701919228Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1201 21:07:14.743753  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.702192379Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1201 21:07:14.743761  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.70239116Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1201 21:07:14.743770  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.702743464Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.743783  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.703251326Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1201 21:07:14.743797  521964 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1201 21:07:14.743882  521964 cni.go:84] Creating CNI manager for ""
	I1201 21:07:14.743892  521964 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:07:14.743907  521964 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 21:07:14.743929  521964 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-198694 NodeName:functional-198694 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 21:07:14.744055  521964 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-198694"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
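	The generated kubeadm config above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---`. A standard-library-only sketch (not minikube's actual code) of splitting such a stream and listing each document's `kind`:

	```python
	# Sample mirroring the structure of the kubeadm config dumped above
	# (bodies elided; only apiVersion/kind shown).
	sample = """\
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	"""

	def doc_kinds(stream: str) -> list[str]:
	    """Split a multi-document YAML stream on '---' and collect each kind."""
	    kinds = []
	    for doc in stream.split("\n---\n"):
	        for line in doc.splitlines():
	            if line.startswith("kind:"):
	                kinds.append(line.split(":", 1)[1].strip())
	    return kinds

	print(doc_kinds(sample))
	# → ['InitConfiguration', 'ClusterConfiguration', 'KubeletConfiguration', 'KubeProxyConfiguration']
	```

	minikube writes this stream to /var/tmp/minikube/kubeadm.yaml.new, as the scp step below shows.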
	
	I1201 21:07:14.744124  521964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 21:07:14.751405  521964 command_runner.go:130] > kubeadm
	I1201 21:07:14.751425  521964 command_runner.go:130] > kubectl
	I1201 21:07:14.751429  521964 command_runner.go:130] > kubelet
	I1201 21:07:14.752384  521964 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 21:07:14.752448  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 21:07:14.760026  521964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 21:07:14.773137  521964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 21:07:14.786891  521964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1201 21:07:14.799994  521964 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1201 21:07:14.803501  521964 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1201 21:07:14.803615  521964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:07:14.920306  521964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 21:07:15.405274  521964 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694 for IP: 192.168.49.2
	I1201 21:07:15.405300  521964 certs.go:195] generating shared ca certs ...
	I1201 21:07:15.405343  521964 certs.go:227] acquiring lock for ca certs: {Name:mk0475ccdbd6f854bab22fd8dfb32cc1af021336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:15.405542  521964 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key
	I1201 21:07:15.405589  521964 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key
	I1201 21:07:15.405597  521964 certs.go:257] generating profile certs ...
	I1201 21:07:15.405726  521964 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key
	I1201 21:07:15.405806  521964 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key.ab5f5a28
	I1201 21:07:15.405849  521964 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key
	I1201 21:07:15.405858  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1201 21:07:15.405870  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1201 21:07:15.405880  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1201 21:07:15.405895  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1201 21:07:15.405908  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1201 21:07:15.405920  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1201 21:07:15.405931  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1201 21:07:15.405941  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1201 21:07:15.406006  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem (1338 bytes)
	W1201 21:07:15.406049  521964 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002_empty.pem, impossibly tiny 0 bytes
	I1201 21:07:15.406068  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 21:07:15.406113  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem (1082 bytes)
	I1201 21:07:15.406137  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem (1123 bytes)
	I1201 21:07:15.406172  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem (1675 bytes)
	I1201 21:07:15.406237  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:07:15.406287  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem -> /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.406308  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.406325  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.407085  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 21:07:15.435325  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 21:07:15.460453  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 21:07:15.484820  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1201 21:07:15.503541  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 21:07:15.522001  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 21:07:15.540074  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 21:07:15.557935  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 21:07:15.576709  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem --> /usr/share/ca-certificates/486002.pem (1338 bytes)
	I1201 21:07:15.595484  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /usr/share/ca-certificates/4860022.pem (1708 bytes)
	I1201 21:07:15.614431  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 21:07:15.632609  521964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 21:07:15.645463  521964 ssh_runner.go:195] Run: openssl version
	I1201 21:07:15.651732  521964 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1201 21:07:15.652120  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/486002.pem && ln -fs /usr/share/ca-certificates/486002.pem /etc/ssl/certs/486002.pem"
	I1201 21:07:15.660522  521964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.664099  521964 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.664137  521964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.664196  521964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.704899  521964 command_runner.go:130] > 51391683
	I1201 21:07:15.705348  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/486002.pem /etc/ssl/certs/51391683.0"
	I1201 21:07:15.713374  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4860022.pem && ln -fs /usr/share/ca-certificates/4860022.pem /etc/ssl/certs/4860022.pem"
	I1201 21:07:15.721756  521964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.725563  521964 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.725613  521964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.725662  521964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.766341  521964 command_runner.go:130] > 3ec20f2e
	I1201 21:07:15.766756  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4860022.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 21:07:15.774531  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 21:07:15.784868  521964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.788871  521964 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.788929  521964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.788991  521964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.829962  521964 command_runner.go:130] > b5213941
	I1201 21:07:15.830101  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
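The stanza above repeats one pattern three times: hash a CA cert with `openssl x509 -hash`, then symlink it under `<hash>.0` in `/etc/ssl/certs`, which is the standard OpenSSL trust-store layout (tools look up trust anchors by subject-name hash). A minimal reproduction with a throwaway cert — all paths and names below are illustrative, not taken from the log:

```shell
set -e
mkdir -p /tmp/demo-certs
# Generate a throwaway self-signed CA cert (demo only; not one of minikube's certs)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 -subj "/CN=demoCA" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.pem 2>/dev/null
# Compute the subject-name hash OpenSSL uses to index trust anchors
hash=$(openssl x509 -hash -noout -in /tmp/demo-ca.pem)
# Link the cert under "<hash>.0", mirroring what the log does into /etc/ssl/certs
ln -fs /tmp/demo-ca.pem "/tmp/demo-certs/${hash}.0"
```

After this, any verifier pointed at the directory (e.g. `openssl verify -CApath /tmp/demo-certs`) can find the CA by its hash, which is why minikube creates those `51391683.0`-style links.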
	I1201 21:07:15.838399  521964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 21:07:15.842255  521964 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 21:07:15.842282  521964 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1201 21:07:15.842289  521964 command_runner.go:130] > Device: 259,1	Inode: 2345358     Links: 1
	I1201 21:07:15.842296  521964 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1201 21:07:15.842308  521964 command_runner.go:130] > Access: 2025-12-01 21:03:07.261790641 +0000
	I1201 21:07:15.842313  521964 command_runner.go:130] > Modify: 2025-12-01 20:59:03.599058650 +0000
	I1201 21:07:15.842318  521964 command_runner.go:130] > Change: 2025-12-01 20:59:03.599058650 +0000
	I1201 21:07:15.842324  521964 command_runner.go:130] >  Birth: 2025-12-01 20:59:03.599058650 +0000
	I1201 21:07:15.842405  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 21:07:15.883885  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:15.884377  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 21:07:15.925029  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:15.925488  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 21:07:15.967363  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:15.967505  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 21:07:16.008933  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:16.009470  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 21:07:16.052395  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:16.052881  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 21:07:16.094441  521964 command_runner.go:130] > Certificate will not expire
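Each `-checkend 86400` call above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours): exit status 0 plus the message `Certificate will not expire` means it is still good, which is how minikube decides it can reuse the existing control-plane certs instead of regenerating them. A self-contained sketch (paths are hypothetical, not minikube's):

```shell
set -e
# Create a demo cert valid for 30 days
openssl req -x509 -newkey rsa:2048 -nodes -days 30 -subj "/CN=demo" \
  -keyout /tmp/demo-check.key -out /tmp/demo-check.crt 2>/dev/null
# Exits 0 because the cert is still valid 86400 seconds from now
openssl x509 -noout -in /tmp/demo-check.crt -checkend 86400
```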
	I1201 21:07:16.094868  521964 kubeadm.go:401] StartCluster: {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:07:16.094970  521964 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 21:07:16.095033  521964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 21:07:16.122671  521964 cri.go:89] found id: ""
	I1201 21:07:16.122745  521964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 21:07:16.129629  521964 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1201 21:07:16.129704  521964 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1201 21:07:16.129749  521964 command_runner.go:130] > /var/lib/minikube/etcd:
	I1201 21:07:16.130618  521964 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 21:07:16.130634  521964 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 21:07:16.130700  521964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 21:07:16.138263  521964 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:07:16.138690  521964 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-198694" does not appear in /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.138796  521964 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-482752/kubeconfig needs updating (will repair): [kubeconfig missing "functional-198694" cluster setting kubeconfig missing "functional-198694" context setting]
	I1201 21:07:16.139097  521964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/kubeconfig: {Name:mk92cfd0553ba70a7f11610c1bc1b8b04b905ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:16.139560  521964 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.139697  521964 kapi.go:59] client config for functional-198694: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key", CAFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 21:07:16.140229  521964 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1201 21:07:16.140256  521964 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1201 21:07:16.140265  521964 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1201 21:07:16.140270  521964 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1201 21:07:16.140285  521964 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1201 21:07:16.140581  521964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 21:07:16.140673  521964 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1201 21:07:16.148484  521964 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1201 21:07:16.148518  521964 kubeadm.go:602] duration metric: took 17.877938ms to restartPrimaryControlPlane
	I1201 21:07:16.148528  521964 kubeadm.go:403] duration metric: took 53.667619ms to StartCluster
	I1201 21:07:16.148545  521964 settings.go:142] acquiring lock: {Name:mk783c1fd28fb527bb837882511f132133dc86fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:16.148604  521964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.149244  521964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/kubeconfig: {Name:mk92cfd0553ba70a7f11610c1bc1b8b04b905ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:16.149450  521964 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 21:07:16.149837  521964 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:07:16.149887  521964 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 21:07:16.149959  521964 addons.go:70] Setting storage-provisioner=true in profile "functional-198694"
	I1201 21:07:16.149971  521964 addons.go:239] Setting addon storage-provisioner=true in "functional-198694"
	I1201 21:07:16.149997  521964 host.go:66] Checking if "functional-198694" exists ...
	I1201 21:07:16.150469  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:16.150813  521964 addons.go:70] Setting default-storageclass=true in profile "functional-198694"
	I1201 21:07:16.150847  521964 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-198694"
	I1201 21:07:16.151095  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:16.157800  521964 out.go:179] * Verifying Kubernetes components...
	I1201 21:07:16.160495  521964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:07:16.191854  521964 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 21:07:16.194709  521964 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:16.194728  521964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 21:07:16.194804  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:16.200857  521964 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.201020  521964 kapi.go:59] client config for functional-198694: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key", CAFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 21:07:16.201620  521964 addons.go:239] Setting addon default-storageclass=true in "functional-198694"
	I1201 21:07:16.201664  521964 host.go:66] Checking if "functional-198694" exists ...
	I1201 21:07:16.202447  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:16.245603  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:16.261120  521964 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:16.261144  521964 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 21:07:16.261216  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:16.294119  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:16.373164  521964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 21:07:16.408855  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:16.445769  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:17.156317  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.156488  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156559  521964 retry.go:31] will retry after 323.483538ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156628  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.156659  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156673  521964 retry.go:31] will retry after 132.387182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156540  521964 node_ready.go:35] waiting up to 6m0s for node "functional-198694" to be "Ready" ...
	I1201 21:07:17.156859  521964 type.go:168] "Request Body" body=""
	I1201 21:07:17.156951  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:17.157318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:17.289607  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:17.345927  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.349389  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.349423  521964 retry.go:31] will retry after 369.598465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.480797  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:17.537300  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.541071  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.541105  521964 retry.go:31] will retry after 250.665906ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.657414  521964 type.go:168] "Request Body" body=""
	I1201 21:07:17.657490  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:17.657803  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:17.720223  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:17.783305  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.783341  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.783362  521964 retry.go:31] will retry after 375.003536ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.792548  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:17.854946  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.854989  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.855009  521964 retry.go:31] will retry after 643.882626ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.157596  521964 type.go:168] "Request Body" body=""
	I1201 21:07:18.157670  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:18.158003  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:18.159267  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:18.225579  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:18.225683  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.225726  521964 retry.go:31] will retry after 1.172405999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.500161  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:18.566908  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:18.566958  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.566979  521964 retry.go:31] will retry after 1.221518169s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.657190  521964 type.go:168] "Request Body" body=""
	I1201 21:07:18.657271  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:18.657601  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:19.157332  521964 type.go:168] "Request Body" body=""
	I1201 21:07:19.157408  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:19.157736  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:19.157807  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:19.398291  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:19.478299  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:19.478401  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:19.478424  521964 retry.go:31] will retry after 725.636222ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:19.657679  521964 type.go:168] "Request Body" body=""
	I1201 21:07:19.657755  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:19.658075  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:19.789414  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:19.847191  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:19.847229  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:19.847250  521964 retry.go:31] will retry after 688.680113ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.157514  521964 type.go:168] "Request Body" body=""
	I1201 21:07:20.157586  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:20.157835  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:20.205210  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:20.265409  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:20.265448  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.265467  521964 retry.go:31] will retry after 1.46538703s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.536913  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:20.597058  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:20.597109  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.597130  521964 retry.go:31] will retry after 1.65793185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.657434  521964 type.go:168] "Request Body" body=""
	I1201 21:07:20.657509  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:20.657856  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:21.157726  521964 type.go:168] "Request Body" body=""
	I1201 21:07:21.157805  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:21.158133  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:21.158204  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:21.656922  521964 type.go:168] "Request Body" body=""
	I1201 21:07:21.657048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:21.657367  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:21.731621  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:21.794486  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:21.794526  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:21.794546  521964 retry.go:31] will retry after 2.907930062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:22.156980  521964 type.go:168] "Request Body" body=""
	I1201 21:07:22.157055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:22.157385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:22.255851  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:22.319449  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:22.319491  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:22.319511  521964 retry.go:31] will retry after 2.874628227s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:22.656962  521964 type.go:168] "Request Body" body=""
	I1201 21:07:22.657055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:22.657381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:23.156910  521964 type.go:168] "Request Body" body=""
	I1201 21:07:23.156984  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:23.157294  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:23.656973  521964 type.go:168] "Request Body" body=""
	I1201 21:07:23.657065  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:23.657420  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:23.657472  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:24.157139  521964 type.go:168] "Request Body" body=""
	I1201 21:07:24.157221  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:24.157543  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:24.657245  521964 type.go:168] "Request Body" body=""
	I1201 21:07:24.657316  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:24.657622  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:24.702795  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:24.765996  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:24.766044  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:24.766064  521964 retry.go:31] will retry after 4.286350529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:25.157658  521964 type.go:168] "Request Body" body=""
	I1201 21:07:25.157735  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:25.158024  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:25.194368  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:25.250297  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:25.253946  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:25.253992  521964 retry.go:31] will retry after 4.844090269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:25.657562  521964 type.go:168] "Request Body" body=""
	I1201 21:07:25.657643  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:25.657986  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:25.658042  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:26.157893  521964 type.go:168] "Request Body" body=""
	I1201 21:07:26.157964  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:26.158227  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:26.657145  521964 type.go:168] "Request Body" body=""
	I1201 21:07:26.657225  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:26.657521  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:27.156930  521964 type.go:168] "Request Body" body=""
	I1201 21:07:27.157016  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:27.157343  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:27.656909  521964 type.go:168] "Request Body" body=""
	I1201 21:07:27.656990  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:27.657272  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:28.156970  521964 type.go:168] "Request Body" body=""
	I1201 21:07:28.157048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:28.157420  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:28.157476  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:28.657148  521964 type.go:168] "Request Body" body=""
	I1201 21:07:28.657244  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:28.657592  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:29.053156  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:29.109834  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:29.112973  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:29.113004  521964 retry.go:31] will retry after 7.544668628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:29.157167  521964 type.go:168] "Request Body" body=""
	I1201 21:07:29.157241  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:29.157507  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:29.656951  521964 type.go:168] "Request Body" body=""
	I1201 21:07:29.657043  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:29.657380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:30.099244  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:30.157873  521964 type.go:168] "Request Body" body=""
	I1201 21:07:30.157941  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:30.158210  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:30.158254  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:30.164980  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:30.165032  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:30.165052  521964 retry.go:31] will retry after 3.932491359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:30.657621  521964 type.go:168] "Request Body" body=""
	I1201 21:07:30.657701  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:30.657964  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:31.157730  521964 type.go:168] "Request Body" body=""
	I1201 21:07:31.157809  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:31.158140  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:31.656971  521964 type.go:168] "Request Body" body=""
	I1201 21:07:31.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:31.657377  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:32.156945  521964 type.go:168] "Request Body" body=""
	I1201 21:07:32.157020  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:32.157283  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:32.656981  521964 type.go:168] "Request Body" body=""
	I1201 21:07:32.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:32.657395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:32.657449  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:33.156987  521964 type.go:168] "Request Body" body=""
	I1201 21:07:33.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:33.157406  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:33.657102  521964 type.go:168] "Request Body" body=""
	I1201 21:07:33.657175  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:33.657460  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:34.097811  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:34.156372  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:34.156417  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:34.156437  521964 retry.go:31] will retry after 10.974576666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:34.157589  521964 type.go:168] "Request Body" body=""
	I1201 21:07:34.157652  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:34.157912  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:34.657701  521964 type.go:168] "Request Body" body=""
	I1201 21:07:34.657780  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:34.658097  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:34.658164  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:35.157826  521964 type.go:168] "Request Body" body=""
	I1201 21:07:35.157905  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:35.158165  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:35.656910  521964 type.go:168] "Request Body" body=""
	I1201 21:07:35.656988  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:35.657287  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:36.157319  521964 type.go:168] "Request Body" body=""
	I1201 21:07:36.157409  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:36.157794  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:36.657573  521964 type.go:168] "Request Body" body=""
	I1201 21:07:36.657644  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:36.657912  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:36.658034  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:36.730483  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:36.730533  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:36.730554  521964 retry.go:31] will retry after 6.063500375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:37.157009  521964 type.go:168] "Request Body" body=""
	I1201 21:07:37.157097  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:37.157448  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:37.157505  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:37.657206  521964 type.go:168] "Request Body" body=""
	I1201 21:07:37.657296  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:37.657631  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:38.157704  521964 type.go:168] "Request Body" body=""
	I1201 21:07:38.157772  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:38.158095  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:38.657893  521964 type.go:168] "Request Body" body=""
	I1201 21:07:38.657966  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:38.658289  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:39.156875  521964 type.go:168] "Request Body" body=""
	I1201 21:07:39.156971  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:39.157322  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:39.656915  521964 type.go:168] "Request Body" body=""
	I1201 21:07:39.657022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:39.657329  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:39.657378  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:40.156953  521964 type.go:168] "Request Body" body=""
	I1201 21:07:40.157033  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:40.157378  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:40.657085  521964 type.go:168] "Request Body" body=""
	I1201 21:07:40.657161  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:40.657485  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:41.157198  521964 type.go:168] "Request Body" body=""
	I1201 21:07:41.157267  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:41.157524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:41.657708  521964 type.go:168] "Request Body" body=""
	I1201 21:07:41.657786  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:41.658115  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:41.658168  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:42.157124  521964 type.go:168] "Request Body" body=""
	I1201 21:07:42.157211  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:42.157646  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:42.657031  521964 type.go:168] "Request Body" body=""
	I1201 21:07:42.657110  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:42.657398  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:42.794843  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:42.853617  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:42.853659  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:42.853680  521964 retry.go:31] will retry after 14.65335173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:43.156952  521964 type.go:168] "Request Body" body=""
	I1201 21:07:43.157032  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:43.157349  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:43.656979  521964 type.go:168] "Request Body" body=""
	I1201 21:07:43.657076  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:43.657394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:44.156994  521964 type.go:168] "Request Body" body=""
	I1201 21:07:44.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:44.157343  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:44.157384  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:44.656997  521964 type.go:168] "Request Body" body=""
	I1201 21:07:44.657087  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:44.657396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:45.131211  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:45.157806  521964 type.go:168] "Request Body" body=""
	I1201 21:07:45.157891  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:45.158212  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:45.221334  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:45.221384  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:45.221409  521964 retry.go:31] will retry after 11.551495399s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:45.657812  521964 type.go:168] "Request Body" body=""
	I1201 21:07:45.657890  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:45.658158  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:46.157214  521964 type.go:168] "Request Body" body=""
	I1201 21:07:46.157292  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:46.157581  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:46.157642  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:46.657575  521964 type.go:168] "Request Body" body=""
	I1201 21:07:46.657647  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:46.657977  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:47.157285  521964 type.go:168] "Request Body" body=""
	I1201 21:07:47.157350  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:47.157647  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:47.656995  521964 type.go:168] "Request Body" body=""
	I1201 21:07:47.657075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:47.657403  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:48.156986  521964 type.go:168] "Request Body" body=""
	I1201 21:07:48.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:48.157380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:48.657048  521964 type.go:168] "Request Body" body=""
	I1201 21:07:48.657118  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:48.657442  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:48.657502  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:49.157019  521964 type.go:168] "Request Body" body=""
	I1201 21:07:49.157102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:49.157404  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:49.657133  521964 type.go:168] "Request Body" body=""
	I1201 21:07:49.657208  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:49.657513  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:50.156941  521964 type.go:168] "Request Body" body=""
	I1201 21:07:50.157013  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:50.157268  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:50.656997  521964 type.go:168] "Request Body" body=""
	I1201 21:07:50.657077  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:50.657401  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:51.157109  521964 type.go:168] "Request Body" body=""
	I1201 21:07:51.157181  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:51.157563  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:51.157620  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:51.657409  521964 type.go:168] "Request Body" body=""
	I1201 21:07:51.657480  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:51.657812  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:52.157619  521964 type.go:168] "Request Body" body=""
	I1201 21:07:52.157701  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:52.158034  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:52.657819  521964 type.go:168] "Request Body" body=""
	I1201 21:07:52.657897  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:52.658222  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:53.157452  521964 type.go:168] "Request Body" body=""
	I1201 21:07:53.157532  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:53.157789  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:53.157829  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:53.657659  521964 type.go:168] "Request Body" body=""
	I1201 21:07:53.657737  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:53.658067  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:54.157887  521964 type.go:168] "Request Body" body=""
	I1201 21:07:54.157963  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:54.158311  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:54.656870  521964 type.go:168] "Request Body" body=""
	I1201 21:07:54.656941  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:54.657207  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:55.156922  521964 type.go:168] "Request Body" body=""
	I1201 21:07:55.156998  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:55.157347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:55.656937  521964 type.go:168] "Request Body" body=""
	I1201 21:07:55.657065  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:55.657390  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:55.657445  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:56.157203  521964 type.go:168] "Request Body" body=""
	I1201 21:07:56.157283  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:56.157556  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:56.657510  521964 type.go:168] "Request Body" body=""
	I1201 21:07:56.657589  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:56.657925  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:56.773160  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:56.828599  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:56.831983  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:56.832017  521964 retry.go:31] will retry after 19.593958555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:57.157556  521964 type.go:168] "Request Body" body=""
	I1201 21:07:57.157632  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:57.157962  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:57.507290  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:57.561691  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:57.565020  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:57.565054  521964 retry.go:31] will retry after 13.393925675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:57.657318  521964 type.go:168] "Request Body" body=""
	I1201 21:07:57.657392  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:57.657711  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:57.657760  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:58.157573  521964 type.go:168] "Request Body" body=""
	I1201 21:07:58.157646  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:58.157951  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:58.657731  521964 type.go:168] "Request Body" body=""
	I1201 21:07:58.657806  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:58.658143  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:59.157766  521964 type.go:168] "Request Body" body=""
	I1201 21:07:59.157844  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:59.158113  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:59.657909  521964 type.go:168] "Request Body" body=""
	I1201 21:07:59.657992  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:59.658327  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:59.658388  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:00.157067  521964 type.go:168] "Request Body" body=""
	I1201 21:08:00.157155  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:00.157502  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:00.656909  521964 type.go:168] "Request Body" body=""
	I1201 21:08:00.656981  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:00.657260  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:01.156980  521964 type.go:168] "Request Body" body=""
	I1201 21:08:01.157061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:01.157434  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:01.656989  521964 type.go:168] "Request Body" body=""
	I1201 21:08:01.657067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:01.657427  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:02.157121  521964 type.go:168] "Request Body" body=""
	I1201 21:08:02.157192  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:02.157450  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:02.157491  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:02.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:08:02.657058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:02.657389  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:03.156950  521964 type.go:168] "Request Body" body=""
	I1201 21:08:03.157027  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:03.157381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:03.657653  521964 type.go:168] "Request Body" body=""
	I1201 21:08:03.657724  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:03.658043  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:04.157851  521964 type.go:168] "Request Body" body=""
	I1201 21:08:04.157926  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:04.158295  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:04.158353  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:04.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:08:04.656984  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:04.657308  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:05.159258  521964 type.go:168] "Request Body" body=""
	I1201 21:08:05.159335  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:05.159644  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:05.656978  521964 type.go:168] "Request Body" body=""
	I1201 21:08:05.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:05.657400  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:06.157419  521964 type.go:168] "Request Body" body=""
	I1201 21:08:06.157493  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:06.157828  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:06.657668  521964 type.go:168] "Request Body" body=""
	I1201 21:08:06.657743  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:06.658026  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:06.658074  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:07.157783  521964 type.go:168] "Request Body" body=""
	I1201 21:08:07.157860  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:07.158171  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:07.656931  521964 type.go:168] "Request Body" body=""
	I1201 21:08:07.657012  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:07.657345  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:08.157032  521964 type.go:168] "Request Body" body=""
	I1201 21:08:08.157106  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:08.157464  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:08.657162  521964 type.go:168] "Request Body" body=""
	I1201 21:08:08.657254  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:08.657573  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:09.157296  521964 type.go:168] "Request Body" body=""
	I1201 21:08:09.157373  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:09.157697  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:09.157750  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:09.657059  521964 type.go:168] "Request Body" body=""
	I1201 21:08:09.657141  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:09.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:10.156962  521964 type.go:168] "Request Body" body=""
	I1201 21:08:10.157037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:10.157365  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:10.656967  521964 type.go:168] "Request Body" body=""
	I1201 21:08:10.657051  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:10.657364  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:10.960044  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:08:11.016321  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:11.019785  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:11.019824  521964 retry.go:31] will retry after 44.695855679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:11.156928  521964 type.go:168] "Request Body" body=""
	I1201 21:08:11.157027  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:11.157315  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:11.657003  521964 type.go:168] "Request Body" body=""
	I1201 21:08:11.657075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:11.657408  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:11.657463  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:12.157121  521964 type.go:168] "Request Body" body=""
	I1201 21:08:12.157198  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:12.157770  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:12.657058  521964 type.go:168] "Request Body" body=""
	I1201 21:08:12.657134  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:12.657388  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:13.156995  521964 type.go:168] "Request Body" body=""
	I1201 21:08:13.157075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:13.157385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:13.657097  521964 type.go:168] "Request Body" body=""
	I1201 21:08:13.657169  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:13.657467  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:13.657512  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:14.156922  521964 type.go:168] "Request Body" body=""
	I1201 21:08:14.157012  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:14.157318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:14.657025  521964 type.go:168] "Request Body" body=""
	I1201 21:08:14.657098  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:14.657429  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:15.157163  521964 type.go:168] "Request Body" body=""
	I1201 21:08:15.157273  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:15.157607  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:15.657300  521964 type.go:168] "Request Body" body=""
	I1201 21:08:15.657393  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:15.657718  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:15.657762  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:16.157618  521964 type.go:168] "Request Body" body=""
	I1201 21:08:16.157704  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:16.158073  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:16.426568  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:08:16.504541  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:16.504580  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:16.504599  521964 retry.go:31] will retry after 41.569353087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:16.657931  521964 type.go:168] "Request Body" body=""
	I1201 21:08:16.658002  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:16.658310  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:17.156879  521964 type.go:168] "Request Body" body=""
	I1201 21:08:17.156968  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:17.157222  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:17.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:08:17.657052  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:17.657405  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:18.157142  521964 type.go:168] "Request Body" body=""
	I1201 21:08:18.157229  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:18.157610  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:18.157665  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:18.657777  521964 type.go:168] "Request Body" body=""
	I1201 21:08:18.657865  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:18.658174  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:19.156889  521964 type.go:168] "Request Body" body=""
	I1201 21:08:19.156967  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:19.157284  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:19.657012  521964 type.go:168] "Request Body" body=""
	I1201 21:08:19.657096  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:19.657452  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:20.156910  521964 type.go:168] "Request Body" body=""
	I1201 21:08:20.156981  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:20.157264  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:20.656990  521964 type.go:168] "Request Body" body=""
	I1201 21:08:20.657076  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:20.657458  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:20.657526  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:21.157000  521964 type.go:168] "Request Body" body=""
	I1201 21:08:21.157092  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:21.157485  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:21.656883  521964 type.go:168] "Request Body" body=""
	I1201 21:08:21.656968  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:21.657320  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:22.157049  521964 type.go:168] "Request Body" body=""
	I1201 21:08:22.157135  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:22.157505  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:22.657283  521964 type.go:168] "Request Body" body=""
	I1201 21:08:22.657387  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:22.657820  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:22.657893  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:23.157642  521964 type.go:168] "Request Body" body=""
	I1201 21:08:23.157715  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:23.157983  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:23.657627  521964 type.go:168] "Request Body" body=""
	I1201 21:08:23.657716  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:23.658152  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:24.156945  521964 type.go:168] "Request Body" body=""
	I1201 21:08:24.157033  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:24.157478  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:24.657185  521964 type.go:168] "Request Body" body=""
	I1201 21:08:24.657275  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:24.657653  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:25.157527  521964 type.go:168] "Request Body" body=""
	I1201 21:08:25.157631  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:25.158006  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:25.158072  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:25.657861  521964 type.go:168] "Request Body" body=""
	I1201 21:08:25.657947  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:25.658305  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:26.157315  521964 type.go:168] "Request Body" body=""
	I1201 21:08:26.157387  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:26.157664  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:26.657761  521964 type.go:168] "Request Body" body=""
	I1201 21:08:26.657845  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:26.658250  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:27.157001  521964 type.go:168] "Request Body" body=""
	I1201 21:08:27.157089  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:27.157490  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:27.657204  521964 type.go:168] "Request Body" body=""
	I1201 21:08:27.657277  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:27.657573  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:27.657627  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:28.157001  521964 type.go:168] "Request Body" body=""
	I1201 21:08:28.157095  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:28.157476  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:28.657072  521964 type.go:168] "Request Body" body=""
	I1201 21:08:28.657162  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:28.657537  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:29.157417  521964 type.go:168] "Request Body" body=""
	I1201 21:08:29.157501  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:29.157799  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:29.657718  521964 type.go:168] "Request Body" body=""
	I1201 21:08:29.657811  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:29.658220  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:29.658280  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:30.156978  521964 type.go:168] "Request Body" body=""
	I1201 21:08:30.157057  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:30.157401  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:30.656889  521964 type.go:168] "Request Body" body=""
	I1201 21:08:30.656971  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:30.657275  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:31.157026  521964 type.go:168] "Request Body" body=""
	I1201 21:08:31.157118  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:31.157518  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:31.656986  521964 type.go:168] "Request Body" body=""
	I1201 21:08:31.657072  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:31.657407  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:32.157753  521964 type.go:168] "Request Body" body=""
	I1201 21:08:32.157835  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:32.158232  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:32.158291  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:32.657000  521964 type.go:168] "Request Body" body=""
	I1201 21:08:32.657087  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:32.657475  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:33.157220  521964 type.go:168] "Request Body" body=""
	I1201 21:08:33.157305  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:33.157692  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:33.657487  521964 type.go:168] "Request Body" body=""
	I1201 21:08:33.657629  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:33.657931  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:34.157729  521964 type.go:168] "Request Body" body=""
	I1201 21:08:34.157800  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:34.158115  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:34.656912  521964 type.go:168] "Request Body" body=""
	I1201 21:08:34.657003  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:34.657412  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:34.657482  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:35.157152  521964 type.go:168] "Request Body" body=""
	I1201 21:08:35.157241  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:35.157546  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:35.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:08:35.657062  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:35.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:36.157282  521964 type.go:168] "Request Body" body=""
	I1201 21:08:36.157367  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:36.157727  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:36.657599  521964 type.go:168] "Request Body" body=""
	I1201 21:08:36.657686  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:36.657988  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:36.658045  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:37.157802  521964 type.go:168] "Request Body" body=""
	I1201 21:08:37.157896  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:37.158276  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:37.657031  521964 type.go:168] "Request Body" body=""
	I1201 21:08:37.657119  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:37.657486  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:38.157766  521964 type.go:168] "Request Body" body=""
	I1201 21:08:38.157842  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:38.158130  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:38.657916  521964 type.go:168] "Request Body" body=""
	I1201 21:08:38.657997  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:38.658359  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:38.658421  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:39.156995  521964 type.go:168] "Request Body" body=""
	I1201 21:08:39.157093  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:39.157508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:39.657230  521964 type.go:168] "Request Body" body=""
	I1201 21:08:39.657317  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:39.657685  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:40.157525  521964 type.go:168] "Request Body" body=""
	I1201 21:08:40.157614  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:40.157997  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:40.657880  521964 type.go:168] "Request Body" body=""
	I1201 21:08:40.657968  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:40.658348  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:41.156945  521964 type.go:168] "Request Body" body=""
	I1201 21:08:41.157030  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:41.157382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:41.157447  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:41.657680  521964 type.go:168] "Request Body" body=""
	I1201 21:08:41.657767  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:41.658134  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:42.156920  521964 type.go:168] "Request Body" body=""
	I1201 21:08:42.157036  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:42.157525  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:42.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:08:42.657005  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:42.657312  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:43.156990  521964 type.go:168] "Request Body" body=""
	I1201 21:08:43.157079  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:43.157479  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:43.157548  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:43.657235  521964 type.go:168] "Request Body" body=""
	I1201 21:08:43.657325  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:43.657683  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:44.157498  521964 type.go:168] "Request Body" body=""
	I1201 21:08:44.157581  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:44.158002  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:44.657818  521964 type.go:168] "Request Body" body=""
	I1201 21:08:44.657915  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:44.658331  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:45.157080  521964 type.go:168] "Request Body" body=""
	I1201 21:08:45.157172  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:45.157650  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:45.157719  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:45.656935  521964 type.go:168] "Request Body" body=""
	I1201 21:08:45.657016  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:45.657311  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:46.157385  521964 type.go:168] "Request Body" body=""
	I1201 21:08:46.157475  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:46.157855  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:46.657753  521964 type.go:168] "Request Body" body=""
	I1201 21:08:46.657842  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:46.658189  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:47.157536  521964 type.go:168] "Request Body" body=""
	I1201 21:08:47.157614  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:47.157944  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:47.157998  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:47.657747  521964 type.go:168] "Request Body" body=""
	I1201 21:08:47.657826  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:47.658196  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:48.157876  521964 type.go:168] "Request Body" body=""
	I1201 21:08:48.157958  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:48.158348  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:48.656928  521964 type.go:168] "Request Body" body=""
	I1201 21:08:48.657026  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:48.657375  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:49.156985  521964 type.go:168] "Request Body" body=""
	I1201 21:08:49.157067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:49.157458  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:49.657192  521964 type.go:168] "Request Body" body=""
	I1201 21:08:49.657287  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:49.657715  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:49.657793  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:50.157561  521964 type.go:168] "Request Body" body=""
	I1201 21:08:50.157644  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:50.157981  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:50.657775  521964 type.go:168] "Request Body" body=""
	I1201 21:08:50.657859  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:50.658229  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:51.156948  521964 type.go:168] "Request Body" body=""
	I1201 21:08:51.157046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:51.157416  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:51.656916  521964 type.go:168] "Request Body" body=""
	I1201 21:08:51.656999  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:51.657330  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:52.157006  521964 type.go:168] "Request Body" body=""
	I1201 21:08:52.157094  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:52.157485  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:52.157551  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:52.657260  521964 type.go:168] "Request Body" body=""
	I1201 21:08:52.657345  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:52.657796  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:53.157505  521964 type.go:168] "Request Body" body=""
	I1201 21:08:53.157589  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:53.157948  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:53.657814  521964 type.go:168] "Request Body" body=""
	I1201 21:08:53.657901  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:53.658274  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:54.157033  521964 type.go:168] "Request Body" body=""
	I1201 21:08:54.157120  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:54.157494  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:54.657829  521964 type.go:168] "Request Body" body=""
	I1201 21:08:54.657912  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:54.658226  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:54.658280  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:55.156969  521964 type.go:168] "Request Body" body=""
	I1201 21:08:55.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:55.157449  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:55.657040  521964 type.go:168] "Request Body" body=""
	I1201 21:08:55.657127  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:55.657483  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:55.716783  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:08:55.791498  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:55.795332  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:55.795559  521964 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1201 21:08:56.157158  521964 type.go:168] "Request Body" body=""
	I1201 21:08:56.157251  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:56.157619  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:56.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:08:56.657679  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:56.658038  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:57.157909  521964 type.go:168] "Request Body" body=""
	I1201 21:08:57.157989  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:57.158351  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:57.158413  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:57.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:08:57.656992  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:57.657380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:58.074174  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:08:58.149106  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:58.149168  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:58.149265  521964 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1201 21:08:58.152649  521964 out.go:179] * Enabled addons: 
	I1201 21:08:58.156383  521964 addons.go:530] duration metric: took 1m42.00648536s for enable addons: enabled=[]
	I1201 21:08:58.157274  521964 type.go:168] "Request Body" body=""
	I1201 21:08:58.157352  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:58.157737  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:58.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:08:58.657670  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:58.658025  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:59.157338  521964 type.go:168] "Request Body" body=""
	I1201 21:08:59.157435  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:59.157723  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:59.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:08:59.657679  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:59.658051  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:59.658126  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:00.157924  521964 type.go:168] "Request Body" body=""
	I1201 21:09:00.158055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:00.158429  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:00.656938  521964 type.go:168] "Request Body" body=""
	I1201 21:09:00.657023  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:00.657396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:01.157023  521964 type.go:168] "Request Body" body=""
	I1201 21:09:01.157113  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:01.157519  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:01.657045  521964 type.go:168] "Request Body" body=""
	I1201 21:09:01.657134  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:01.657523  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:02.157228  521964 type.go:168] "Request Body" body=""
	I1201 21:09:02.157309  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:02.157730  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:02.157812  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:02.657697  521964 type.go:168] "Request Body" body=""
	I1201 21:09:02.657797  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:02.658264  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:03.157016  521964 type.go:168] "Request Body" body=""
	I1201 21:09:03.157105  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:03.157506  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:03.656940  521964 type.go:168] "Request Body" body=""
	I1201 21:09:03.657023  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:03.657317  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:04.157109  521964 type.go:168] "Request Body" body=""
	I1201 21:09:04.157198  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:04.157621  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:04.657376  521964 type.go:168] "Request Body" body=""
	I1201 21:09:04.657464  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:04.657841  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:04.657911  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:05.157626  521964 type.go:168] "Request Body" body=""
	I1201 21:09:05.157704  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:05.158028  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:05.657928  521964 type.go:168] "Request Body" body=""
	I1201 21:09:05.658022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:05.658411  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:06.157283  521964 type.go:168] "Request Body" body=""
	I1201 21:09:06.157384  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:06.157756  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:06.657421  521964 type.go:168] "Request Body" body=""
	I1201 21:09:06.657507  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:06.657800  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:07.157695  521964 type.go:168] "Request Body" body=""
	I1201 21:09:07.157786  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:07.158194  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:07.158265  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:07.656962  521964 type.go:168] "Request Body" body=""
	I1201 21:09:07.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:07.657425  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:08.157836  521964 type.go:168] "Request Body" body=""
	I1201 21:09:08.157917  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:08.158191  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:08.657012  521964 type.go:168] "Request Body" body=""
	I1201 21:09:08.657104  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:08.657486  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:09.156993  521964 type.go:168] "Request Body" body=""
	I1201 21:09:09.157089  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:09.157471  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:09.657023  521964 type.go:168] "Request Body" body=""
	I1201 21:09:09.657120  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:09.657538  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:09.657606  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:10.156999  521964 type.go:168] "Request Body" body=""
	I1201 21:09:10.157086  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:10.157484  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:10.657232  521964 type.go:168] "Request Body" body=""
	I1201 21:09:10.657327  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:10.657688  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:11.157533  521964 type.go:168] "Request Body" body=""
	I1201 21:09:11.157620  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:11.157927  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:11.656987  521964 type.go:168] "Request Body" body=""
	I1201 21:09:11.657072  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:11.657395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:12.157102  521964 type.go:168] "Request Body" body=""
	I1201 21:09:12.157196  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:12.157583  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:12.157646  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:12.657123  521964 type.go:168] "Request Body" body=""
	I1201 21:09:12.657203  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:12.657499  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:13.156966  521964 type.go:168] "Request Body" body=""
	I1201 21:09:13.157055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:13.157438  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:13.656965  521964 type.go:168] "Request Body" body=""
	I1201 21:09:13.657049  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:13.657420  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:14.157820  521964 type.go:168] "Request Body" body=""
	I1201 21:09:14.157917  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:14.158213  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:14.158267  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:14.656980  521964 type.go:168] "Request Body" body=""
	I1201 21:09:14.657065  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:14.657454  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:15.157262  521964 type.go:168] "Request Body" body=""
	I1201 21:09:15.157373  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:15.157794  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:15.657581  521964 type.go:168] "Request Body" body=""
	I1201 21:09:15.657709  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:15.658011  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:16.157632  521964 type.go:168] "Request Body" body=""
	I1201 21:09:16.157709  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:16.158115  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:16.657457  521964 type.go:168] "Request Body" body=""
	I1201 21:09:16.657635  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:16.658136  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:16.658210  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:17.156895  521964 type.go:168] "Request Body" body=""
	I1201 21:09:17.157017  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:17.157412  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:17.657169  521964 type.go:168] "Request Body" body=""
	I1201 21:09:17.657255  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:17.657728  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:18.157777  521964 type.go:168] "Request Body" body=""
	I1201 21:09:18.157890  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:18.158292  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:18.656939  521964 type.go:168] "Request Body" body=""
	I1201 21:09:18.657031  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:18.657390  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:19.157017  521964 type.go:168] "Request Body" body=""
	I1201 21:09:19.157103  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:19.157518  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:19.157588  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:19.657290  521964 type.go:168] "Request Body" body=""
	I1201 21:09:19.657384  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:19.657811  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:20.157631  521964 type.go:168] "Request Body" body=""
	I1201 21:09:20.157730  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:20.158033  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:20.657806  521964 type.go:168] "Request Body" body=""
	I1201 21:09:20.657889  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:20.658276  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:21.156985  521964 type.go:168] "Request Body" body=""
	I1201 21:09:21.157070  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:21.157465  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:21.656919  521964 type.go:168] "Request Body" body=""
	I1201 21:09:21.657003  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:21.657335  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:21.657390  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:22.157006  521964 type.go:168] "Request Body" body=""
	I1201 21:09:22.157096  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:22.157477  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:22.657014  521964 type.go:168] "Request Body" body=""
	I1201 21:09:22.657111  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:22.657539  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:23.157097  521964 type.go:168] "Request Body" body=""
	I1201 21:09:23.157195  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:23.157520  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:23.656994  521964 type.go:168] "Request Body" body=""
	I1201 21:09:23.657090  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:23.657519  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:23.657588  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:24.157112  521964 type.go:168] "Request Body" body=""
	I1201 21:09:24.157201  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:24.157599  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:24.657319  521964 type.go:168] "Request Body" body=""
	I1201 21:09:24.657392  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:24.657673  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:25.156980  521964 type.go:168] "Request Body" body=""
	I1201 21:09:25.157061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:25.157490  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:25.657225  521964 type.go:168] "Request Body" body=""
	I1201 21:09:25.657322  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:25.657718  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:25.657784  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:26.157490  521964 type.go:168] "Request Body" body=""
	I1201 21:09:26.157579  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:26.157896  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:26.657062  521964 type.go:168] "Request Body" body=""
	I1201 21:09:26.657152  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:26.657508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:27.157009  521964 type.go:168] "Request Body" body=""
	I1201 21:09:27.157105  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:27.157490  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:27.656936  521964 type.go:168] "Request Body" body=""
	I1201 21:09:27.657022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:27.657384  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:28.157006  521964 type.go:168] "Request Body" body=""
	I1201 21:09:28.157101  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:28.157533  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:28.157613  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:28.657356  521964 type.go:168] "Request Body" body=""
	I1201 21:09:28.657444  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:28.657855  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:29.157632  521964 type.go:168] "Request Body" body=""
	I1201 21:09:29.157718  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:29.158017  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:29.657847  521964 type.go:168] "Request Body" body=""
	I1201 21:09:29.657938  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:29.658379  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:30.157140  521964 type.go:168] "Request Body" body=""
	I1201 21:09:30.157235  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:30.157673  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:30.157765  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:30.657527  521964 type.go:168] "Request Body" body=""
	I1201 21:09:30.657629  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:30.657947  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:31.157843  521964 type.go:168] "Request Body" body=""
	I1201 21:09:31.157942  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:31.158394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:31.657184  521964 type.go:168] "Request Body" body=""
	I1201 21:09:31.657271  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:31.657662  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:32.157380  521964 type.go:168] "Request Body" body=""
	I1201 21:09:32.157463  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:32.157761  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:32.157813  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:32.657593  521964 type.go:168] "Request Body" body=""
	I1201 21:09:32.657683  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:32.658044  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:33.157900  521964 type.go:168] "Request Body" body=""
	I1201 21:09:33.157992  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:33.158384  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:33.656918  521964 type.go:168] "Request Body" body=""
	I1201 21:09:33.656990  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:33.657277  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:34.156983  521964 type.go:168] "Request Body" body=""
	I1201 21:09:34.157073  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:34.157434  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:34.656961  521964 type.go:168] "Request Body" body=""
	I1201 21:09:34.657042  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:34.657395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:34.657466  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:35.157073  521964 type.go:168] "Request Body" body=""
	I1201 21:09:35.157156  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:35.157471  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:35.656996  521964 type.go:168] "Request Body" body=""
	I1201 21:09:35.657088  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:35.657485  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:36.157396  521964 type.go:168] "Request Body" body=""
	I1201 21:09:36.157480  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:36.157836  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:36.657755  521964 type.go:168] "Request Body" body=""
	I1201 21:09:36.657834  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:36.658119  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:36.658171  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:37.156863  521964 type.go:168] "Request Body" body=""
	I1201 21:09:37.156942  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:37.157295  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:37.657055  521964 type.go:168] "Request Body" body=""
	I1201 21:09:37.657144  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:37.657495  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:38.156908  521964 type.go:168] "Request Body" body=""
	I1201 21:09:38.156981  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:38.157238  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:38.656975  521964 type.go:168] "Request Body" body=""
	I1201 21:09:38.657054  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:38.657402  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:39.157119  521964 type.go:168] "Request Body" body=""
	I1201 21:09:39.157202  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:39.157574  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:39.157635  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:39.656880  521964 type.go:168] "Request Body" body=""
	I1201 21:09:39.656951  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:39.657222  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:40.156899  521964 type.go:168] "Request Body" body=""
	I1201 21:09:40.156974  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:40.157303  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:40.656905  521964 type.go:168] "Request Body" body=""
	I1201 21:09:40.656985  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:40.657322  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:41.157534  521964 type.go:168] "Request Body" body=""
	I1201 21:09:41.157609  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:41.157874  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:41.157915  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:41.657857  521964 type.go:168] "Request Body" body=""
	I1201 21:09:41.657938  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:41.658297  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:42.157048  521964 type.go:168] "Request Body" body=""
	I1201 21:09:42.157140  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:42.157537  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:42.657274  521964 type.go:168] "Request Body" body=""
	I1201 21:09:42.657353  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:42.657634  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:43.156967  521964 type.go:168] "Request Body" body=""
	I1201 21:09:43.157039  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:43.157360  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:43.656976  521964 type.go:168] "Request Body" body=""
	I1201 21:09:43.657053  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:43.657381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:43.657439  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:44.157645  521964 type.go:168] "Request Body" body=""
	I1201 21:09:44.157713  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:44.157985  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:44.657826  521964 type.go:168] "Request Body" body=""
	I1201 21:09:44.657923  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:44.658392  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:45.157027  521964 type.go:168] "Request Body" body=""
	I1201 21:09:45.157125  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:45.157611  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:45.656842  521964 type.go:168] "Request Body" body=""
	I1201 21:09:45.656917  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:45.657187  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:46.157288  521964 type.go:168] "Request Body" body=""
	I1201 21:09:46.157362  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:46.157699  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:46.157757  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:46.657562  521964 type.go:168] "Request Body" body=""
	I1201 21:09:46.657642  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:46.658013  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:47.157757  521964 type.go:168] "Request Body" body=""
	I1201 21:09:47.157829  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:47.158112  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:47.657894  521964 type.go:168] "Request Body" body=""
	I1201 21:09:47.657972  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:47.658340  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:48.156992  521964 type.go:168] "Request Body" body=""
	I1201 21:09:48.157083  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:48.157458  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:48.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:09:48.657654  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:48.657937  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:48.657979  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:49.157706  521964 type.go:168] "Request Body" body=""
	I1201 21:09:49.157785  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:49.158140  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:49.657844  521964 type.go:168] "Request Body" body=""
	I1201 21:09:49.657921  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:49.658333  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:50.156929  521964 type.go:168] "Request Body" body=""
	I1201 21:09:50.157000  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:50.157277  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:50.656964  521964 type.go:168] "Request Body" body=""
	I1201 21:09:50.657044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:50.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:51.157095  521964 type.go:168] "Request Body" body=""
	I1201 21:09:51.157176  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:51.157528  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:51.157583  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:51.656908  521964 type.go:168] "Request Body" body=""
	I1201 21:09:51.656978  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:51.657247  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:52.156952  521964 type.go:168] "Request Body" body=""
	I1201 21:09:52.157030  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:52.157355  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:52.656992  521964 type.go:168] "Request Body" body=""
	I1201 21:09:52.657082  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:52.657488  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:53.157030  521964 type.go:168] "Request Body" body=""
	I1201 21:09:53.157109  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:53.157430  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:53.656984  521964 type.go:168] "Request Body" body=""
	I1201 21:09:53.657067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:53.657399  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:53.657456  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:54.156969  521964 type.go:168] "Request Body" body=""
	I1201 21:09:54.157048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:54.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:54.657665  521964 type.go:168] "Request Body" body=""
	I1201 21:09:54.657741  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:54.658010  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:55.157807  521964 type.go:168] "Request Body" body=""
	I1201 21:09:55.157877  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:55.158212  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:55.656938  521964 type.go:168] "Request Body" body=""
	I1201 21:09:55.657015  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:55.657364  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:56.157167  521964 type.go:168] "Request Body" body=""
	I1201 21:09:56.157246  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:56.157570  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:56.157631  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:56.657418  521964 type.go:168] "Request Body" body=""
	I1201 21:09:56.657498  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:56.657830  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:57.157641  521964 type.go:168] "Request Body" body=""
	I1201 21:09:57.157734  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:57.158097  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:57.657841  521964 type.go:168] "Request Body" body=""
	I1201 21:09:57.657910  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:57.658189  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:58.156868  521964 type.go:168] "Request Body" body=""
	I1201 21:09:58.156944  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:58.157264  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:58.656995  521964 type.go:168] "Request Body" body=""
	I1201 21:09:58.657083  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:58.657454  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:58.657513  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:59.157748  521964 type.go:168] "Request Body" body=""
	I1201 21:09:59.157815  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:59.158119  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:59.656860  521964 type.go:168] "Request Body" body=""
	I1201 21:09:59.656934  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:59.657255  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:00.182510  521964 type.go:168] "Request Body" body=""
	I1201 21:10:00.182611  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:00.182943  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:00.657771  521964 type.go:168] "Request Body" body=""
	I1201 21:10:00.657850  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:00.658154  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:00.658206  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:01.156889  521964 type.go:168] "Request Body" body=""
	I1201 21:10:01.156992  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:01.157298  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:01.657214  521964 type.go:168] "Request Body" body=""
	I1201 21:10:01.657298  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:01.657646  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:02.157865  521964 type.go:168] "Request Body" body=""
	I1201 21:10:02.157946  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:02.158249  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:02.656955  521964 type.go:168] "Request Body" body=""
	I1201 21:10:02.657029  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:02.657347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:03.156981  521964 type.go:168] "Request Body" body=""
	I1201 21:10:03.157059  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:03.157411  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:03.157464  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:03.656988  521964 type.go:168] "Request Body" body=""
	I1201 21:10:03.657085  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:03.657453  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:04.156958  521964 type.go:168] "Request Body" body=""
	I1201 21:10:04.157031  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:04.157381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:04.657145  521964 type.go:168] "Request Body" body=""
	I1201 21:10:04.657224  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:04.657551  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:05.159263  521964 type.go:168] "Request Body" body=""
	I1201 21:10:05.159342  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:05.159636  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:05.159683  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:05.656989  521964 type.go:168] "Request Body" body=""
	I1201 21:10:05.657074  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:05.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:06.157539  521964 type.go:168] "Request Body" body=""
	I1201 21:10:06.157637  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:06.158058  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:06.657526  521964 type.go:168] "Request Body" body=""
	I1201 21:10:06.657604  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:06.657867  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:07.157646  521964 type.go:168] "Request Body" body=""
	I1201 21:10:07.157727  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:07.158042  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:07.657854  521964 type.go:168] "Request Body" body=""
	I1201 21:10:07.657935  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:07.658292  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:07.658351  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:08.157603  521964 type.go:168] "Request Body" body=""
	I1201 21:10:08.157674  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:08.157973  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:08.657777  521964 type.go:168] "Request Body" body=""
	I1201 21:10:08.657862  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:08.658197  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:09.156897  521964 type.go:168] "Request Body" body=""
	I1201 21:10:09.156973  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:09.157298  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:09.656870  521964 type.go:168] "Request Body" body=""
	I1201 21:10:09.656947  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:09.657210  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:10.156995  521964 type.go:168] "Request Body" body=""
	I1201 21:10:10.157076  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:10.157429  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:10.157492  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:10.657080  521964 type.go:168] "Request Body" body=""
	I1201 21:10:10.657192  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:10.657646  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:11.157154  521964 type.go:168] "Request Body" body=""
	I1201 21:10:11.157228  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:11.157607  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:11.657517  521964 type.go:168] "Request Body" body=""
	I1201 21:10:11.657597  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:11.658000  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:12.157792  521964 type.go:168] "Request Body" body=""
	I1201 21:10:12.157864  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:12.158185  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:12.158240  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:12.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:10:12.656959  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:12.657219  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:13.156967  521964 type.go:168] "Request Body" body=""
	I1201 21:10:13.157051  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:13.157415  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:13.657121  521964 type.go:168] "Request Body" body=""
	I1201 21:10:13.657199  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:13.657550  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:14.157841  521964 type.go:168] "Request Body" body=""
	I1201 21:10:14.157913  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:14.158250  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:14.158314  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:14.656980  521964 type.go:168] "Request Body" body=""
	I1201 21:10:14.657062  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:14.657362  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:15.156981  521964 type.go:168] "Request Body" body=""
	I1201 21:10:15.157065  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:15.157428  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:15.656915  521964 type.go:168] "Request Body" body=""
	I1201 21:10:15.656989  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:15.657251  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:16.157278  521964 type.go:168] "Request Body" body=""
	I1201 21:10:16.157357  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:16.157705  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:16.657618  521964 type.go:168] "Request Body" body=""
	I1201 21:10:16.657700  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:16.658040  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:16.658091  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:17.157765  521964 type.go:168] "Request Body" body=""
	I1201 21:10:17.157836  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:17.158164  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:17.657888  521964 type.go:168] "Request Body" body=""
	I1201 21:10:17.657971  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:17.658355  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:18.156999  521964 type.go:168] "Request Body" body=""
	I1201 21:10:18.157092  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:18.157410  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:18.657112  521964 type.go:168] "Request Body" body=""
	I1201 21:10:18.657195  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:18.657574  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:19.156976  521964 type.go:168] "Request Body" body=""
	I1201 21:10:19.157055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:19.157401  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:19.157452  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:19.657118  521964 type.go:168] "Request Body" body=""
	I1201 21:10:19.657191  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:19.657516  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:20.156931  521964 type.go:168] "Request Body" body=""
	I1201 21:10:20.157015  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:20.157379  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:20.656945  521964 type.go:168] "Request Body" body=""
	I1201 21:10:20.657020  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:20.657391  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:21.157095  521964 type.go:168] "Request Body" body=""
	I1201 21:10:21.157176  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:21.157552  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:21.157608  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:21.657312  521964 type.go:168] "Request Body" body=""
	I1201 21:10:21.657400  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:21.657677  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:22.156963  521964 type.go:168] "Request Body" body=""
	I1201 21:10:22.157039  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:22.157399  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:22.656978  521964 type.go:168] "Request Body" body=""
	I1201 21:10:22.657053  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:22.657368  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:23.156906  521964 type.go:168] "Request Body" body=""
	I1201 21:10:23.156979  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:23.157247  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:23.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:10:23.657058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:23.657411  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:23.657467  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:24.157128  521964 type.go:168] "Request Body" body=""
	I1201 21:10:24.157203  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:24.157551  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:24.657808  521964 type.go:168] "Request Body" body=""
	I1201 21:10:24.657883  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:24.658178  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:25.156896  521964 type.go:168] "Request Body" body=""
	I1201 21:10:25.156988  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:25.157349  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:25.657068  521964 type.go:168] "Request Body" body=""
	I1201 21:10:25.657155  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:25.657524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:25.657581  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:26.157344  521964 type.go:168] "Request Body" body=""
	I1201 21:10:26.157430  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:26.157711  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:26.657676  521964 type.go:168] "Request Body" body=""
	I1201 21:10:26.657747  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:26.658068  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:27.157849  521964 type.go:168] "Request Body" body=""
	I1201 21:10:27.157936  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:27.158262  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:27.656928  521964 type.go:168] "Request Body" body=""
	I1201 21:10:27.656998  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:27.657287  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:28.156897  521964 type.go:168] "Request Body" body=""
	I1201 21:10:28.156978  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:28.157356  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:28.157423  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:28.656939  521964 type.go:168] "Request Body" body=""
	I1201 21:10:28.657026  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:28.657407  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:29.157147  521964 type.go:168] "Request Body" body=""
	I1201 21:10:29.157277  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:29.157661  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:29.657507  521964 type.go:168] "Request Body" body=""
	I1201 21:10:29.657592  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:29.657974  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:30.157860  521964 type.go:168] "Request Body" body=""
	I1201 21:10:30.157951  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:30.158382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:30.158453  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:30.656922  521964 type.go:168] "Request Body" body=""
	I1201 21:10:30.656991  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:30.657266  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:31.156994  521964 type.go:168] "Request Body" body=""
	I1201 21:10:31.157077  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:31.157409  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:31.657398  521964 type.go:168] "Request Body" body=""
	I1201 21:10:31.657481  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:31.657807  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:32.157533  521964 type.go:168] "Request Body" body=""
	I1201 21:10:32.157604  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:32.157880  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:32.657746  521964 type.go:168] "Request Body" body=""
	I1201 21:10:32.657828  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:32.658176  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:32.658229  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:33.156931  521964 type.go:168] "Request Body" body=""
	I1201 21:10:33.157018  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:33.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:33.657643  521964 type.go:168] "Request Body" body=""
	I1201 21:10:33.657710  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:33.658006  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:34.157807  521964 type.go:168] "Request Body" body=""
	I1201 21:10:34.157894  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:34.158278  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:34.656986  521964 type.go:168] "Request Body" body=""
	I1201 21:10:34.657059  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:34.657396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:35.157082  521964 type.go:168] "Request Body" body=""
	I1201 21:10:35.157199  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:35.157466  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:35.157521  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:35.656976  521964 type.go:168] "Request Body" body=""
	I1201 21:10:35.657056  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:35.657353  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:36.157368  521964 type.go:168] "Request Body" body=""
	I1201 21:10:36.157452  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:36.157808  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:36.657277  521964 type.go:168] "Request Body" body=""
	I1201 21:10:36.657352  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:36.657623  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:37.156972  521964 type.go:168] "Request Body" body=""
	I1201 21:10:37.157053  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:37.157410  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:37.656998  521964 type.go:168] "Request Body" body=""
	I1201 21:10:37.657079  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:37.657415  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:37.657471  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:38.156915  521964 type.go:168] "Request Body" body=""
	I1201 21:10:38.156982  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:38.157242  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:38.656962  521964 type.go:168] "Request Body" body=""
	I1201 21:10:38.657036  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:38.657361  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:39.156965  521964 type.go:168] "Request Body" body=""
	I1201 21:10:39.157041  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:39.157378  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:39.657653  521964 type.go:168] "Request Body" body=""
	I1201 21:10:39.657723  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:39.657992  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:39.658033  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:40.157791  521964 type.go:168] "Request Body" body=""
	I1201 21:10:40.157881  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:40.158267  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:40.656973  521964 type.go:168] "Request Body" body=""
	I1201 21:10:40.657052  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:40.657374  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:41.157040  521964 type.go:168] "Request Body" body=""
	I1201 21:10:41.157114  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:41.157371  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:41.657289  521964 type.go:168] "Request Body" body=""
	I1201 21:10:41.657371  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:41.657729  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:42.157592  521964 type.go:168] "Request Body" body=""
	I1201 21:10:42.157681  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:42.158115  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:42.158193  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:42.657466  521964 type.go:168] "Request Body" body=""
	I1201 21:10:42.657542  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:42.657815  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:43.157576  521964 type.go:168] "Request Body" body=""
	I1201 21:10:43.157658  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:43.158000  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:43.657674  521964 type.go:168] "Request Body" body=""
	I1201 21:10:43.657745  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:43.658086  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:44.157304  521964 type.go:168] "Request Body" body=""
	I1201 21:10:44.157391  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:44.157723  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:44.657534  521964 type.go:168] "Request Body" body=""
	I1201 21:10:44.657625  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:44.657958  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:44.658013  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:45.157841  521964 type.go:168] "Request Body" body=""
	I1201 21:10:45.157928  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:45.158336  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:45.657663  521964 type.go:168] "Request Body" body=""
	I1201 21:10:45.657751  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:45.658031  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:46.157548  521964 type.go:168] "Request Body" body=""
	I1201 21:10:46.157629  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:46.157950  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:46.657877  521964 type.go:168] "Request Body" body=""
	I1201 21:10:46.657952  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:46.658291  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:46.658347  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:47.156857  521964 type.go:168] "Request Body" body=""
	I1201 21:10:47.156933  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:47.157198  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:47.656938  521964 type.go:168] "Request Body" body=""
	I1201 21:10:47.657018  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:47.657397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:48.157015  521964 type.go:168] "Request Body" body=""
	I1201 21:10:48.157088  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:48.157423  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:48.657541  521964 type.go:168] "Request Body" body=""
	I1201 21:10:48.657618  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:48.657936  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:49.157607  521964 type.go:168] "Request Body" body=""
	I1201 21:10:49.157694  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:49.158025  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:49.158076  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:49.657812  521964 type.go:168] "Request Body" body=""
	I1201 21:10:49.657885  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:49.658194  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:50.157521  521964 type.go:168] "Request Body" body=""
	I1201 21:10:50.157593  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:50.157864  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:50.657707  521964 type.go:168] "Request Body" body=""
	I1201 21:10:50.657786  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:50.658124  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:51.157805  521964 type.go:168] "Request Body" body=""
	I1201 21:10:51.157886  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:51.158224  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:51.158279  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:51.657127  521964 type.go:168] "Request Body" body=""
	I1201 21:10:51.657207  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:51.657471  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:52.156922  521964 type.go:168] "Request Body" body=""
	I1201 21:10:52.157004  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:52.157305  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:52.656968  521964 type.go:168] "Request Body" body=""
	I1201 21:10:52.657044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:52.657379  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:53.156947  521964 type.go:168] "Request Body" body=""
	I1201 21:10:53.157022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:53.157288  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:53.656961  521964 type.go:168] "Request Body" body=""
	I1201 21:10:53.657037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:53.657360  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:53.657416  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:54.157114  521964 type.go:168] "Request Body" body=""
	I1201 21:10:54.157189  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:54.157509  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:54.656919  521964 type.go:168] "Request Body" body=""
	I1201 21:10:54.656990  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:54.657260  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:55.157007  521964 type.go:168] "Request Body" body=""
	I1201 21:10:55.157093  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:55.157520  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:55.657242  521964 type.go:168] "Request Body" body=""
	I1201 21:10:55.657323  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:55.657660  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:55.657717  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:56.157590  521964 type.go:168] "Request Body" body=""
	I1201 21:10:56.157668  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:56.157942  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:56.657918  521964 type.go:168] "Request Body" body=""
	I1201 21:10:56.657994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:56.658356  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:57.156965  521964 type.go:168] "Request Body" body=""
	I1201 21:10:57.157050  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:57.157377  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:57.657638  521964 type.go:168] "Request Body" body=""
	I1201 21:10:57.657712  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:57.657982  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:57.658023  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:58.157749  521964 type.go:168] "Request Body" body=""
	I1201 21:10:58.157829  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:58.158147  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:58.656879  521964 type.go:168] "Request Body" body=""
	I1201 21:10:58.656954  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:58.657292  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:59.156909  521964 type.go:168] "Request Body" body=""
	I1201 21:10:59.156979  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:59.157246  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:59.656979  521964 type.go:168] "Request Body" body=""
	I1201 21:10:59.657066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:59.657429  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:00.157201  521964 type.go:168] "Request Body" body=""
	I1201 21:11:00.157287  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:00.157616  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:00.157684  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:00.657844  521964 type.go:168] "Request Body" body=""
	I1201 21:11:00.657912  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:00.658231  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:01.156967  521964 type.go:168] "Request Body" body=""
	I1201 21:11:01.157048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:01.157426  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:01.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:11:01.657102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:01.657407  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:02.156872  521964 type.go:168] "Request Body" body=""
	I1201 21:11:02.156950  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:02.157232  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:02.656970  521964 type.go:168] "Request Body" body=""
	I1201 21:11:02.657044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:02.657347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:02.657392  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:03.157114  521964 type.go:168] "Request Body" body=""
	I1201 21:11:03.157193  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:03.157508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:03.656873  521964 type.go:168] "Request Body" body=""
	I1201 21:11:03.656949  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:03.657257  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:04.156981  521964 type.go:168] "Request Body" body=""
	I1201 21:11:04.157079  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:04.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:04.657086  521964 type.go:168] "Request Body" body=""
	I1201 21:11:04.657170  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:04.657515  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:04.657568  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:05.157777  521964 type.go:168] "Request Body" body=""
	I1201 21:11:05.157855  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:05.158116  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:05.657893  521964 type.go:168] "Request Body" body=""
	I1201 21:11:05.657976  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:05.658256  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:06.157228  521964 type.go:168] "Request Body" body=""
	I1201 21:11:06.157325  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:06.157672  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:06.657576  521964 type.go:168] "Request Body" body=""
	I1201 21:11:06.657644  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:06.657918  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:06.657957  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:07.157699  521964 type.go:168] "Request Body" body=""
	I1201 21:11:07.157770  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:07.158064  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:07.657781  521964 type.go:168] "Request Body" body=""
	I1201 21:11:07.657859  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:07.658224  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:08.157367  521964 type.go:168] "Request Body" body=""
	I1201 21:11:08.157437  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:08.157715  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:08.657499  521964 type.go:168] "Request Body" body=""
	I1201 21:11:08.657592  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:08.657968  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:08.658028  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:09.157829  521964 type.go:168] "Request Body" body=""
	I1201 21:11:09.157911  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:09.158288  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:09.656917  521964 type.go:168] "Request Body" body=""
	I1201 21:11:09.656990  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:09.657288  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:10.156991  521964 type.go:168] "Request Body" body=""
	I1201 21:11:10.157073  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:10.157446  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:10.657170  521964 type.go:168] "Request Body" body=""
	I1201 21:11:10.657248  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:10.657599  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:11.156833  521964 type.go:168] "Request Body" body=""
	I1201 21:11:11.156912  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:11.157200  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:11.157249  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:11.656972  521964 type.go:168] "Request Body" body=""
	I1201 21:11:11.657054  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:11.657556  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:12.157243  521964 type.go:168] "Request Body" body=""
	I1201 21:11:12.157318  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:12.157669  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:12.657823  521964 type.go:168] "Request Body" body=""
	I1201 21:11:12.657911  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:12.658208  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:13.156933  521964 type.go:168] "Request Body" body=""
	I1201 21:11:13.157010  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:13.157369  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:13.157434  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:13.657105  521964 type.go:168] "Request Body" body=""
	I1201 21:11:13.657190  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:13.657535  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:14.157809  521964 type.go:168] "Request Body" body=""
	I1201 21:11:14.157875  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:14.158149  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:14.657913  521964 type.go:168] "Request Body" body=""
	I1201 21:11:14.658000  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:14.658340  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:15.156989  521964 type.go:168] "Request Body" body=""
	I1201 21:11:15.157075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:15.157421  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:15.157479  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:15.656928  521964 type.go:168] "Request Body" body=""
	I1201 21:11:15.657004  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:15.657310  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:16.157234  521964 type.go:168] "Request Body" body=""
	I1201 21:11:16.157328  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:16.157693  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:16.657344  521964 type.go:168] "Request Body" body=""
	I1201 21:11:16.657439  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:16.657980  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:17.157136  521964 type.go:168] "Request Body" body=""
	I1201 21:11:17.157223  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:17.157592  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:17.157646  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:17.657532  521964 type.go:168] "Request Body" body=""
	I1201 21:11:17.657620  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:17.657985  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:18.157793  521964 type.go:168] "Request Body" body=""
	I1201 21:11:18.157869  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:18.158224  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:18.657332  521964 type.go:168] "Request Body" body=""
	I1201 21:11:18.657414  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:18.657739  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:19.157633  521964 type.go:168] "Request Body" body=""
	I1201 21:11:19.157712  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:19.158075  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:19.158138  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:19.656860  521964 type.go:168] "Request Body" body=""
	I1201 21:11:19.656944  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:19.657367  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:20.157129  521964 type.go:168] "Request Body" body=""
	I1201 21:11:20.157251  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:20.157538  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:20.656988  521964 type.go:168] "Request Body" body=""
	I1201 21:11:20.657069  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:20.657403  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:21.157154  521964 type.go:168] "Request Body" body=""
	I1201 21:11:21.157249  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:21.157653  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:21.657489  521964 type.go:168] "Request Body" body=""
	I1201 21:11:21.657579  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:21.657887  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:21.657951  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:22.157730  521964 type.go:168] "Request Body" body=""
	I1201 21:11:22.157807  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:22.158188  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:22.656943  521964 type.go:168] "Request Body" body=""
	I1201 21:11:22.657022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:22.657362  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:23.157066  521964 type.go:168] "Request Body" body=""
	I1201 21:11:23.157143  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:23.157413  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:23.656990  521964 type.go:168] "Request Body" body=""
	I1201 21:11:23.657067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:23.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:24.157147  521964 type.go:168] "Request Body" body=""
	I1201 21:11:24.157227  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:24.157551  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:24.157604  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:24.657818  521964 type.go:168] "Request Body" body=""
	I1201 21:11:24.657890  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:24.658165  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:25.156979  521964 type.go:168] "Request Body" body=""
	I1201 21:11:25.157066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:25.157466  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:25.657192  521964 type.go:168] "Request Body" body=""
	I1201 21:11:25.657269  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:25.657598  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:26.157266  521964 type.go:168] "Request Body" body=""
	I1201 21:11:26.157339  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:26.157618  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:26.157661  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:26.657561  521964 type.go:168] "Request Body" body=""
	I1201 21:11:26.657639  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:26.658002  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:27.157818  521964 type.go:168] "Request Body" body=""
	I1201 21:11:27.157901  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:27.158277  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:27.657008  521964 type.go:168] "Request Body" body=""
	I1201 21:11:27.657074  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:27.657338  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:28.157024  521964 type.go:168] "Request Body" body=""
	I1201 21:11:28.157108  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:28.157462  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:28.657032  521964 type.go:168] "Request Body" body=""
	I1201 21:11:28.657112  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:28.657442  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:28.657505  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:29.157808  521964 type.go:168] "Request Body" body=""
	I1201 21:11:29.157877  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:29.158164  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:29.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:11:29.656994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:29.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:30.157157  521964 type.go:168] "Request Body" body=""
	I1201 21:11:30.157249  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:30.157650  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:30.657372  521964 type.go:168] "Request Body" body=""
	I1201 21:11:30.657451  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:30.657748  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:30.657794  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:31.157596  521964 type.go:168] "Request Body" body=""
	I1201 21:11:31.157692  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:31.158099  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:31.657089  521964 type.go:168] "Request Body" body=""
	I1201 21:11:31.657174  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:31.657530  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:32.156920  521964 type.go:168] "Request Body" body=""
	I1201 21:11:32.157002  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:32.157283  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:32.656965  521964 type.go:168] "Request Body" body=""
	I1201 21:11:32.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:32.657400  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:33.157120  521964 type.go:168] "Request Body" body=""
	I1201 21:11:33.157204  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:33.157580  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:33.157650  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:33.656925  521964 type.go:168] "Request Body" body=""
	I1201 21:11:33.657005  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:33.657282  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:34.156999  521964 type.go:168] "Request Body" body=""
	I1201 21:11:34.157085  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:34.157508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:34.657236  521964 type.go:168] "Request Body" body=""
	I1201 21:11:34.657329  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:34.657650  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:35.156913  521964 type.go:168] "Request Body" body=""
	I1201 21:11:35.156987  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:35.157331  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:35.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:11:35.657055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:35.657385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:35.657436  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:36.157278  521964 type.go:168] "Request Body" body=""
	I1201 21:11:36.157365  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:36.157713  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:36.657755  521964 type.go:168] "Request Body" body=""
	I1201 21:11:36.657874  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:36.658213  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:37.156873  521964 type.go:168] "Request Body" body=""
	I1201 21:11:37.156946  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:37.157318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:37.656921  521964 type.go:168] "Request Body" body=""
	I1201 21:11:37.656998  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:37.657365  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:38.157094  521964 type.go:168] "Request Body" body=""
	I1201 21:11:38.157172  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:38.157449  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:38.157537  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:38.656976  521964 type.go:168] "Request Body" body=""
	I1201 21:11:38.657054  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:38.657414  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:39.157117  521964 type.go:168] "Request Body" body=""
	I1201 21:11:39.157193  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:39.157513  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:39.656888  521964 type.go:168] "Request Body" body=""
	I1201 21:11:39.656994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:39.657266  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:40.156958  521964 type.go:168] "Request Body" body=""
	I1201 21:11:40.157031  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:40.157358  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:40.657069  521964 type.go:168] "Request Body" body=""
	I1201 21:11:40.657148  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:40.657480  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:40.657538  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:41.156915  521964 type.go:168] "Request Body" body=""
	I1201 21:11:41.156983  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:41.157301  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:41.657216  521964 type.go:168] "Request Body" body=""
	I1201 21:11:41.657295  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:41.657644  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:42.157003  521964 type.go:168] "Request Body" body=""
	I1201 21:11:42.157088  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:42.157475  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:42.657872  521964 type.go:168] "Request Body" body=""
	I1201 21:11:42.657940  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:42.658284  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:42.658338  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:43.156961  521964 type.go:168] "Request Body" body=""
	I1201 21:11:43.157034  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:43.157374  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:43.657103  521964 type.go:168] "Request Body" body=""
	I1201 21:11:43.657182  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:43.657524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:44.156866  521964 type.go:168] "Request Body" body=""
	I1201 21:11:44.156937  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:44.157219  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:44.656982  521964 type.go:168] "Request Body" body=""
	I1201 21:11:44.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:44.657376  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:45.157037  521964 type.go:168] "Request Body" body=""
	I1201 21:11:45.157121  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:45.157482  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:45.157545  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:45.657188  521964 type.go:168] "Request Body" body=""
	I1201 21:11:45.657259  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:45.657524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:46.157054  521964 type.go:168] "Request Body" body=""
	I1201 21:11:46.157131  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:46.157511  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:46.656990  521964 type.go:168] "Request Body" body=""
	I1201 21:11:46.657078  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:46.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:47.157109  521964 type.go:168] "Request Body" body=""
	I1201 21:11:47.157180  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:47.157448  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:47.656964  521964 type.go:168] "Request Body" body=""
	I1201 21:11:47.657093  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:47.657408  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:47.657462  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:48.156988  521964 type.go:168] "Request Body" body=""
	I1201 21:11:48.157067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:48.157406  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:48.657126  521964 type.go:168] "Request Body" body=""
	I1201 21:11:48.657197  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:48.657487  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:49.156981  521964 type.go:168] "Request Body" body=""
	I1201 21:11:49.157066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:49.157397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:49.656960  521964 type.go:168] "Request Body" body=""
	I1201 21:11:49.657037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:49.657346  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:50.156927  521964 type.go:168] "Request Body" body=""
	I1201 21:11:50.157010  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:50.157276  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:50.157327  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:50.657022  521964 type.go:168] "Request Body" body=""
	I1201 21:11:50.657102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:50.657478  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:51.156975  521964 type.go:168] "Request Body" body=""
	I1201 21:11:51.157058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:51.157409  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:51.656919  521964 type.go:168] "Request Body" body=""
	I1201 21:11:51.656998  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:51.657362  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:52.156975  521964 type.go:168] "Request Body" body=""
	I1201 21:11:52.157051  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:52.157406  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:52.157465  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:52.657158  521964 type.go:168] "Request Body" body=""
	I1201 21:11:52.657238  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:52.657574  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:53.156907  521964 type.go:168] "Request Body" body=""
	I1201 21:11:53.156984  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:53.157259  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:53.656971  521964 type.go:168] "Request Body" body=""
	I1201 21:11:53.657042  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:53.657409  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:54.156987  521964 type.go:168] "Request Body" body=""
	I1201 21:11:54.157066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:54.157400  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:54.656919  521964 type.go:168] "Request Body" body=""
	I1201 21:11:54.656994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:54.657292  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:54.657346  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:55.156967  521964 type.go:168] "Request Body" body=""
	I1201 21:11:55.157050  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:55.157385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:55.656978  521964 type.go:168] "Request Body" body=""
	I1201 21:11:55.657053  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:55.657357  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:56.157331  521964 type.go:168] "Request Body" body=""
	I1201 21:11:56.157412  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:56.157693  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:56.657721  521964 type.go:168] "Request Body" body=""
	I1201 21:11:56.657797  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:56.658158  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:56.658204  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:57.156915  521964 type.go:168] "Request Body" body=""
	I1201 21:11:57.157002  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:57.157388  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:57.657664  521964 type.go:168] "Request Body" body=""
	I1201 21:11:57.657735  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:57.658000  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:58.157786  521964 type.go:168] "Request Body" body=""
	I1201 21:11:58.157861  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:58.158295  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:58.657007  521964 type.go:168] "Request Body" body=""
	I1201 21:11:58.657100  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:58.657480  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:59.157749  521964 type.go:168] "Request Body" body=""
	I1201 21:11:59.157823  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:59.158141  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:59.158186  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:59.656847  521964 type.go:168] "Request Body" body=""
	I1201 21:11:59.656927  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:59.657290  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:00.156952  521964 type.go:168] "Request Body" body=""
	I1201 21:12:00.157062  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:00.157388  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:00.657065  521964 type.go:168] "Request Body" body=""
	I1201 21:12:00.657141  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:00.657419  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:01.156993  521964 type.go:168] "Request Body" body=""
	I1201 21:12:01.157080  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:01.157418  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:01.657372  521964 type.go:168] "Request Body" body=""
	I1201 21:12:01.657452  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:01.657807  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:01.657861  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:02.156990  521964 type.go:168] "Request Body" body=""
	I1201 21:12:02.157067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:02.157446  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:02.656975  521964 type.go:168] "Request Body" body=""
	I1201 21:12:02.657050  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:02.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:03.157097  521964 type.go:168] "Request Body" body=""
	I1201 21:12:03.157177  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:03.157545  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:03.657864  521964 type.go:168] "Request Body" body=""
	I1201 21:12:03.657940  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:03.658290  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:03.658354  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:04.157043  521964 type.go:168] "Request Body" body=""
	I1201 21:12:04.157122  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:04.157481  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:04.657071  521964 type.go:168] "Request Body" body=""
	I1201 21:12:04.657150  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:04.657508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:05.157762  521964 type.go:168] "Request Body" body=""
	I1201 21:12:05.157829  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:05.158111  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:05.657870  521964 type.go:168] "Request Body" body=""
	I1201 21:12:05.658003  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:05.658357  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:05.658411  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:06.157176  521964 type.go:168] "Request Body" body=""
	I1201 21:12:06.157261  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:06.157642  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:06.657501  521964 type.go:168] "Request Body" body=""
	I1201 21:12:06.657577  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:06.657845  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:07.157682  521964 type.go:168] "Request Body" body=""
	I1201 21:12:07.157766  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:07.158185  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:07.656894  521964 type.go:168] "Request Body" body=""
	I1201 21:12:07.656972  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:07.657318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:08.157028  521964 type.go:168] "Request Body" body=""
	I1201 21:12:08.157109  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:08.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:08.157437  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:08.656996  521964 type.go:168] "Request Body" body=""
	I1201 21:12:08.657072  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:08.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:09.157160  521964 type.go:168] "Request Body" body=""
	I1201 21:12:09.157245  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:09.157627  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:09.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:12:09.656966  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:09.657243  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:10.156932  521964 type.go:168] "Request Body" body=""
	I1201 21:12:10.157016  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:10.157347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:10.657029  521964 type.go:168] "Request Body" body=""
	I1201 21:12:10.657110  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:10.657478  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:10.657537  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:11.157239  521964 type.go:168] "Request Body" body=""
	I1201 21:12:11.157313  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:11.157609  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:11.657334  521964 type.go:168] "Request Body" body=""
	I1201 21:12:11.657410  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:11.657733  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:12.157529  521964 type.go:168] "Request Body" body=""
	I1201 21:12:12.157603  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:12.157977  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:12.657303  521964 type.go:168] "Request Body" body=""
	I1201 21:12:12.657379  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:12.657647  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:12.657692  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:13.156979  521964 type.go:168] "Request Body" body=""
	I1201 21:12:13.157059  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:13.157445  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:13.657161  521964 type.go:168] "Request Body" body=""
	I1201 21:12:13.657236  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:13.657560  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:14.157233  521964 type.go:168] "Request Body" body=""
	I1201 21:12:14.157309  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:14.157583  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:14.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:12:14.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:14.657408  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:15.157135  521964 type.go:168] "Request Body" body=""
	I1201 21:12:15.157216  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:15.157563  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:15.157629  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:15.657856  521964 type.go:168] "Request Body" body=""
	I1201 21:12:15.657928  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:15.658198  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:16.157210  521964 type.go:168] "Request Body" body=""
	I1201 21:12:16.157294  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:16.157627  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:16.657499  521964 type.go:168] "Request Body" body=""
	I1201 21:12:16.657580  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:16.657918  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:17.157664  521964 type.go:168] "Request Body" body=""
	I1201 21:12:17.157737  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:17.158007  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:17.158051  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:17.657817  521964 type.go:168] "Request Body" body=""
	I1201 21:12:17.657893  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:17.658321  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:18.157126  521964 type.go:168] "Request Body" body=""
	I1201 21:12:18.157218  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:18.157616  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:18.657309  521964 type.go:168] "Request Body" body=""
	I1201 21:12:18.657377  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:18.657641  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:19.157459  521964 type.go:168] "Request Body" body=""
	I1201 21:12:19.157533  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:19.157874  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:19.657700  521964 type.go:168] "Request Body" body=""
	I1201 21:12:19.657774  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:19.658113  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:19.658170  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:20.157420  521964 type.go:168] "Request Body" body=""
	I1201 21:12:20.157499  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:20.157831  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:20.657717  521964 type.go:168] "Request Body" body=""
	I1201 21:12:20.657790  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:20.658137  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:21.156870  521964 type.go:168] "Request Body" body=""
	I1201 21:12:21.156955  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:21.157335  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:21.656896  521964 type.go:168] "Request Body" body=""
	I1201 21:12:21.656973  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:21.657240  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:22.156959  521964 type.go:168] "Request Body" body=""
	I1201 21:12:22.157032  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:22.157337  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:22.157382  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:22.656961  521964 type.go:168] "Request Body" body=""
	I1201 21:12:22.657035  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:22.657334  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:23.156895  521964 type.go:168] "Request Body" body=""
	I1201 21:12:23.156974  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:23.157240  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:23.656942  521964 type.go:168] "Request Body" body=""
	I1201 21:12:23.657018  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:23.657321  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:24.156930  521964 type.go:168] "Request Body" body=""
	I1201 21:12:24.157030  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:24.157353  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:24.157404  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:24.657661  521964 type.go:168] "Request Body" body=""
	I1201 21:12:24.657744  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:24.658139  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:25.156898  521964 type.go:168] "Request Body" body=""
	I1201 21:12:25.157058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:25.157380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:25.657004  521964 type.go:168] "Request Body" body=""
	I1201 21:12:25.657102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:25.657473  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:26.157364  521964 type.go:168] "Request Body" body=""
	I1201 21:12:26.157445  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:26.157715  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:26.157767  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:26.657747  521964 type.go:168] "Request Body" body=""
	I1201 21:12:26.657820  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:26.658119  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:27.157901  521964 type.go:168] "Request Body" body=""
	I1201 21:12:27.157983  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:27.158328  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:27.656880  521964 type.go:168] "Request Body" body=""
	I1201 21:12:27.656966  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:27.657232  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:28.156968  521964 type.go:168] "Request Body" body=""
	I1201 21:12:28.157046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:28.157396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:28.657122  521964 type.go:168] "Request Body" body=""
	I1201 21:12:28.657193  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:28.657567  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:28.657618  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:29.157156  521964 type.go:168] "Request Body" body=""
	I1201 21:12:29.157234  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:29.157509  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:29.656952  521964 type.go:168] "Request Body" body=""
	I1201 21:12:29.657026  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:29.657361  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:30.156982  521964 type.go:168] "Request Body" body=""
	I1201 21:12:30.157060  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:30.157416  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:30.657692  521964 type.go:168] "Request Body" body=""
	I1201 21:12:30.657762  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:30.658041  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:30.658082  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:31.157866  521964 type.go:168] "Request Body" body=""
	I1201 21:12:31.157947  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:31.158324  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:31.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:12:31.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:31.657381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:32.157066  521964 type.go:168] "Request Body" body=""
	I1201 21:12:32.157144  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:32.157399  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:32.656944  521964 type.go:168] "Request Body" body=""
	I1201 21:12:32.657015  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:32.657365  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:33.156964  521964 type.go:168] "Request Body" body=""
	I1201 21:12:33.157045  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:33.157424  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:33.157484  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:33.657133  521964 type.go:168] "Request Body" body=""
	I1201 21:12:33.657209  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:33.657460  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:34.156966  521964 type.go:168] "Request Body" body=""
	I1201 21:12:34.157049  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:34.157398  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:34.657117  521964 type.go:168] "Request Body" body=""
	I1201 21:12:34.657200  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:34.657538  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:35.157873  521964 type.go:168] "Request Body" body=""
	I1201 21:12:35.157950  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:35.158226  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:35.158268  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:35.656942  521964 type.go:168] "Request Body" body=""
	I1201 21:12:35.657022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:35.657366  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:36.157253  521964 type.go:168] "Request Body" body=""
	I1201 21:12:36.157329  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:36.157665  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:36.657154  521964 type.go:168] "Request Body" body=""
	I1201 21:12:36.657221  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:36.657490  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:37.157161  521964 type.go:168] "Request Body" body=""
	I1201 21:12:37.157235  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:37.157578  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:37.657162  521964 type.go:168] "Request Body" body=""
	I1201 21:12:37.657242  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:37.657583  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:37.657637  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:38.156913  521964 type.go:168] "Request Body" body=""
	I1201 21:12:38.156993  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:38.157311  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:38.656975  521964 type.go:168] "Request Body" body=""
	I1201 21:12:38.657056  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:38.657412  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:39.157107  521964 type.go:168] "Request Body" body=""
	I1201 21:12:39.157181  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:39.157541  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:39.657246  521964 type.go:168] "Request Body" body=""
	I1201 21:12:39.657329  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:39.657614  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:40.157008  521964 type.go:168] "Request Body" body=""
	I1201 21:12:40.157081  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:40.157402  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:40.157459  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:40.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:12:40.657058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:40.657389  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:41.156917  521964 type.go:168] "Request Body" body=""
	I1201 21:12:41.157011  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:41.157297  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:41.656997  521964 type.go:168] "Request Body" body=""
	I1201 21:12:41.657083  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:41.657499  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:42.157169  521964 type.go:168] "Request Body" body=""
	I1201 21:12:42.157262  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:42.157666  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:42.157723  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:42.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:12:42.656961  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:42.657222  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:43.156956  521964 type.go:168] "Request Body" body=""
	I1201 21:12:43.157047  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:43.157347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:43.657015  521964 type.go:168] "Request Body" body=""
	I1201 21:12:43.657087  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:43.657366  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:44.156909  521964 type.go:168] "Request Body" body=""
	I1201 21:12:44.156982  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:44.157261  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:44.656982  521964 type.go:168] "Request Body" body=""
	I1201 21:12:44.657068  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:44.657431  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:44.657488  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:45.157013  521964 type.go:168] "Request Body" body=""
	I1201 21:12:45.157096  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:45.157431  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:45.657107  521964 type.go:168] "Request Body" body=""
	I1201 21:12:45.657195  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:45.657476  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:46.157495  521964 type.go:168] "Request Body" body=""
	I1201 21:12:46.157580  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:46.157930  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:46.656884  521964 type.go:168] "Request Body" body=""
	I1201 21:12:46.656964  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:46.657318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:47.157023  521964 type.go:168] "Request Body" body=""
	I1201 21:12:47.157100  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:47.157421  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:47.157476  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:47.656956  521964 type.go:168] "Request Body" body=""
	I1201 21:12:47.657031  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:47.657374  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:48.156953  521964 type.go:168] "Request Body" body=""
	I1201 21:12:48.157032  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:48.157373  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:48.656942  521964 type.go:168] "Request Body" body=""
	I1201 21:12:48.657023  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:48.657325  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:49.157039  521964 type.go:168] "Request Body" body=""
	I1201 21:12:49.157121  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:49.157480  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:49.157538  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:49.656960  521964 type.go:168] "Request Body" body=""
	I1201 21:12:49.657039  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:49.657352  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:50.156889  521964 type.go:168] "Request Body" body=""
	I1201 21:12:50.156960  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:50.157229  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:50.656950  521964 type.go:168] "Request Body" body=""
	I1201 21:12:50.657037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:50.657397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:51.157121  521964 type.go:168] "Request Body" body=""
	I1201 21:12:51.157204  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:51.157551  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:51.157618  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:51.657566  521964 type.go:168] "Request Body" body=""
	I1201 21:12:51.657641  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:51.657931  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:52.157799  521964 type.go:168] "Request Body" body=""
	I1201 21:12:52.157888  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:52.158264  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:52.656986  521964 type.go:168] "Request Body" body=""
	I1201 21:12:52.657083  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:52.657426  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:53.157683  521964 type.go:168] "Request Body" body=""
	I1201 21:12:53.157769  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:53.158044  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:53.158097  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:53.657845  521964 type.go:168] "Request Body" body=""
	I1201 21:12:53.657932  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:53.658305  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:54.156954  521964 type.go:168] "Request Body" body=""
	I1201 21:12:54.157044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:54.157395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:54.656951  521964 type.go:168] "Request Body" body=""
	I1201 21:12:54.657024  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:54.657370  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:55.157133  521964 type.go:168] "Request Body" body=""
	I1201 21:12:55.157212  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:55.157580  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:55.657319  521964 type.go:168] "Request Body" body=""
	I1201 21:12:55.657404  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:55.657768  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:55.657823  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:56.157456  521964 type.go:168] "Request Body" body=""
	I1201 21:12:56.157537  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:56.157827  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:56.657750  521964 type.go:168] "Request Body" body=""
	I1201 21:12:56.657836  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:56.658210  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:57.156961  521964 type.go:168] "Request Body" body=""
	I1201 21:12:57.157036  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:57.157395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:57.657097  521964 type.go:168] "Request Body" body=""
	I1201 21:12:57.657174  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:57.657457  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:58.156992  521964 type.go:168] "Request Body" body=""
	I1201 21:12:58.157072  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:58.157466  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:58.157532  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:58.657043  521964 type.go:168] "Request Body" body=""
	I1201 21:12:58.657124  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:58.657483  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:59.156864  521964 type.go:168] "Request Body" body=""
	I1201 21:12:59.156938  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:59.157199  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:59.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:12:59.656974  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:59.657286  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:00.157057  521964 type.go:168] "Request Body" body=""
	I1201 21:13:00.157147  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:00.157511  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:00.157569  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:00.657428  521964 type.go:168] "Request Body" body=""
	I1201 21:13:00.657504  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:00.657796  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:01.157663  521964 type.go:168] "Request Body" body=""
	I1201 21:13:01.157764  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:01.158124  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:01.656992  521964 type.go:168] "Request Body" body=""
	I1201 21:13:01.657066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:01.657380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:02.157714  521964 type.go:168] "Request Body" body=""
	I1201 21:13:02.157793  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:02.158080  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:02.158125  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:02.657871  521964 type.go:168] "Request Body" body=""
	I1201 21:13:02.657947  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:02.658316  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:03.156973  521964 type.go:168] "Request Body" body=""
	I1201 21:13:03.157059  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:03.157502  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:03.657012  521964 type.go:168] "Request Body" body=""
	I1201 21:13:03.657090  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:03.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:04.157107  521964 type.go:168] "Request Body" body=""
	I1201 21:13:04.157183  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:04.157524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:04.657241  521964 type.go:168] "Request Body" body=""
	I1201 21:13:04.657321  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:04.657639  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:04.657698  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:05.156921  521964 type.go:168] "Request Body" body=""
	I1201 21:13:05.157001  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:05.157325  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:05.656994  521964 type.go:168] "Request Body" body=""
	I1201 21:13:05.657078  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:05.657437  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:06.157391  521964 type.go:168] "Request Body" body=""
	I1201 21:13:06.157477  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:06.157856  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:06.657298  521964 type.go:168] "Request Body" body=""
	I1201 21:13:06.657378  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:06.657684  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:06.657732  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:07.157498  521964 type.go:168] "Request Body" body=""
	I1201 21:13:07.157579  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:07.157929  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:07.657804  521964 type.go:168] "Request Body" body=""
	I1201 21:13:07.657885  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:07.658219  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:08.157597  521964 type.go:168] "Request Body" body=""
	I1201 21:13:08.157669  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:08.157933  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:08.657711  521964 type.go:168] "Request Body" body=""
	I1201 21:13:08.657785  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:08.658162  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:08.658217  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:09.156936  521964 type.go:168] "Request Body" body=""
	I1201 21:13:09.157015  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:09.157375  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:09.657620  521964 type.go:168] "Request Body" body=""
	I1201 21:13:09.657765  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:09.658040  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:10.157874  521964 type.go:168] "Request Body" body=""
	I1201 21:13:10.157960  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:10.158354  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:10.656946  521964 type.go:168] "Request Body" body=""
	I1201 21:13:10.657024  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:10.657358  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:11.157610  521964 type.go:168] "Request Body" body=""
	I1201 21:13:11.157697  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:11.157986  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:11.158031  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:11.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:13:11.656964  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:11.657296  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:12.157004  521964 type.go:168] "Request Body" body=""
	I1201 21:13:12.157081  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:12.157397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:12.657679  521964 type.go:168] "Request Body" body=""
	I1201 21:13:12.657749  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:12.658023  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:13.157872  521964 type.go:168] "Request Body" body=""
	I1201 21:13:13.157950  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:13.158289  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:13.158341  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:13.656969  521964 type.go:168] "Request Body" body=""
	I1201 21:13:13.657052  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:13.657394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:14.156916  521964 type.go:168] "Request Body" body=""
	I1201 21:13:14.156994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:14.157319  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:14.656957  521964 type.go:168] "Request Body" body=""
	I1201 21:13:14.657034  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:14.657371  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:15.156988  521964 type.go:168] "Request Body" body=""
	I1201 21:13:15.157084  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:15.157470  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:15.656852  521964 type.go:168] "Request Body" body=""
	I1201 21:13:15.656945  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:15.657219  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:15.657269  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:16.157274  521964 type.go:168] "Request Body" body=""
	I1201 21:13:16.157357  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:16.157728  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:16.657690  521964 type.go:168] "Request Body" body=""
	I1201 21:13:16.657781  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:16.658180  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:17.157176  521964 type.go:168] "Request Body" body=""
	I1201 21:13:17.157257  521964 node_ready.go:38] duration metric: took 6m0.000516111s for node "functional-198694" to be "Ready" ...
	I1201 21:13:17.164775  521964 out.go:203] 
	W1201 21:13:17.167674  521964 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1201 21:13:17.167697  521964 out.go:285] * 
	W1201 21:13:17.169852  521964 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 21:13:17.172668  521964 out.go:203] 
	
	
	==> CRI-O <==
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.424023121Z" level=info msg="Using the internal default seccomp profile"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.424082033Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.424135652Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.424186474Z" level=info msg="RDT not available in the host system"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.42426561Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.425133768Z" level=info msg="Conmon does support the --sync option"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.425252149Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.425323122Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.426043787Z" level=info msg="Conmon does support the --sync option"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.426151059Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.426378968Z" level=info msg="Updated default CNI network name to "
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.427018437Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\
"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_liste
n = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.427647075Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.4278005Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.488125886Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.488168388Z" level=info msg="Starting seccomp notifier watcher"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.488219783Z" level=info msg="Create NRI interface"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.488326505Z" level=info msg="built-in NRI default validator is disabled"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.488335006Z" level=info msg="runtime interface created"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.488348568Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.488356355Z" level=info msg="runtime interface starting up..."
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.48836305Z" level=info msg="starting plugins..."
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.488380461Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.488450457Z" level=info msg="No systemd watchdog enabled"
	Dec 01 21:07:14 functional-198694 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:13:19.259679    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:13:19.260111    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:13:19.261851    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:13:19.262342    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:13:19.263823    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 1 19:31] hrtimer: interrupt took 3224715 ns
	[Dec 1 20:00] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:16] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 1 20:22] systemd-journald[231]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 1 20:37] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:38] overlayfs: idmapped layers are currently not supported
	[  +0.076902] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 1 20:44] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:45] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:58] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:13:19 up  2:55,  0 user,  load average: 0.19, 0.25, 0.58
	Linux functional-198694 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 01 21:13:16 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:13:17 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1134.
	Dec 01 21:13:17 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:17 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:17 functional-198694 kubelet[9035]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:17 functional-198694 kubelet[9035]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:17 functional-198694 kubelet[9035]: E1201 21:13:17.261968    9035 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:13:17 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:13:17 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:13:17 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1135.
	Dec 01 21:13:17 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:17 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:17 functional-198694 kubelet[9041]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:17 functional-198694 kubelet[9041]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:17 functional-198694 kubelet[9041]: E1201 21:13:17.990841    9041 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:13:17 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:13:17 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:13:18 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1136.
	Dec 01 21:13:18 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:18 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:18 functional-198694 kubelet[9061]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:18 functional-198694 kubelet[9061]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:18 functional-198694 kubelet[9061]: E1201 21:13:18.727911    9061 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:13:18 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:13:18 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694: exit status 2 (359.537604ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-198694" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-198694 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-198694 get po -A: exit status 1 (58.037983ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-198694 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-198694 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-198694 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-198694
helpers_test.go:243: (dbg) docker inspect functional-198694:

-- stdout --
	[
	    {
	        "Id": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	        "Created": "2025-12-01T20:58:43.365574809Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515902,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:58:43.423541772Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hostname",
	        "HostsPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hosts",
	        "LogPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8-json.log",
	        "Name": "/functional-198694",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-198694:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-198694",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	                "LowerDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26-init/diff:/var/lib/docker/overlay2/f0ba49b44048d740697b37803f992c2f7a99e21ce77995ff128ceffc01329aa1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-198694",
	                "Source": "/var/lib/docker/volumes/functional-198694/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-198694",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-198694",
	                "name.minikube.sigs.k8s.io": "functional-198694",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8cb3cb57c35171bfce361b9e0de9c9f36ef89baf5e4ad0dd73159d10f1056820",
	            "SandboxKey": "/var/run/docker/netns/8cb3cb57c351",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-198694": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:9a:72:4c:a4:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9750c903db8645b2871ee2eb6fd897b77e607b9a995005513c7bcf81da63c819",
	                    "EndpointID": "884d9ec9fdfc44c10ccd4516f4ea05a765fb3ccb2118db0e8af2392e8613c402",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-198694",
	                        "e545295bd958"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694: exit status 2 (309.68193ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-198694 logs -n 25: (1.067676343s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-074555 ssh sudo cat /etc/ssl/certs/486002.pem                                                                                                  │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh            │ functional-074555 ssh sudo cat /usr/share/ca-certificates/486002.pem                                                                                      │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image ls                                                                                                                                │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh            │ functional-074555 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image save kicbase/echo-server:functional-074555 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh            │ functional-074555 ssh sudo cat /etc/ssl/certs/4860022.pem                                                                                                 │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image rm kicbase/echo-server:functional-074555 --alsologtostderr                                                                        │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh            │ functional-074555 ssh sudo cat /usr/share/ca-certificates/4860022.pem                                                                                     │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image ls                                                                                                                                │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh            │ functional-074555 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image save --daemon kicbase/echo-server:functional-074555 --alsologtostderr                                                             │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ update-context │ functional-074555 update-context --alsologtostderr -v=2                                                                                                   │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ update-context │ functional-074555 update-context --alsologtostderr -v=2                                                                                                   │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ update-context │ functional-074555 update-context --alsologtostderr -v=2                                                                                                   │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image ls --format short --alsologtostderr                                                                                               │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image ls --format yaml --alsologtostderr                                                                                                │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh            │ functional-074555 ssh pgrep buildkitd                                                                                                                     │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │                     │
	│ image          │ functional-074555 image ls --format json --alsologtostderr                                                                                                │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image ls --format table --alsologtostderr                                                                                               │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image build -t localhost/my-image:functional-074555 testdata/build --alsologtostderr                                                    │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image          │ functional-074555 image ls                                                                                                                                │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ delete         │ -p functional-074555                                                                                                                                      │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ start          │ -p functional-198694 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │                     │
	│ start          │ -p functional-198694 --alsologtostderr -v=8                                                                                                               │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:07 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 21:07:11
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 21:07:11.242920  521964 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:07:11.243351  521964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:07:11.243387  521964 out.go:374] Setting ErrFile to fd 2...
	I1201 21:07:11.243410  521964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:07:11.243711  521964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:07:11.244177  521964 out.go:368] Setting JSON to false
	I1201 21:07:11.245066  521964 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10181,"bootTime":1764613051,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 21:07:11.245167  521964 start.go:143] virtualization:  
	I1201 21:07:11.248721  521964 out.go:179] * [functional-198694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 21:07:11.252584  521964 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 21:07:11.252676  521964 notify.go:221] Checking for updates...
	I1201 21:07:11.258436  521964 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 21:07:11.261368  521964 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:11.264327  521964 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 21:07:11.267307  521964 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 21:07:11.270189  521964 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 21:07:11.273718  521964 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:07:11.273862  521964 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 21:07:11.298213  521964 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 21:07:11.298331  521964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:07:11.359645  521964 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 21:07:11.34998497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:07:11.359790  521964 docker.go:319] overlay module found
	I1201 21:07:11.364655  521964 out.go:179] * Using the docker driver based on existing profile
	I1201 21:07:11.367463  521964 start.go:309] selected driver: docker
	I1201 21:07:11.367488  521964 start.go:927] validating driver "docker" against &{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:07:11.367603  521964 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 21:07:11.367700  521964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:07:11.423386  521964 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 21:07:11.414394313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:07:11.423798  521964 cni.go:84] Creating CNI manager for ""
	I1201 21:07:11.423867  521964 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:07:11.423916  521964 start.go:353] cluster config:
	{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:07:11.427203  521964 out.go:179] * Starting "functional-198694" primary control-plane node in "functional-198694" cluster
	I1201 21:07:11.430063  521964 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 21:07:11.433025  521964 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 21:07:11.436022  521964 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:07:11.436110  521964 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 21:07:11.455717  521964 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 21:07:11.455744  521964 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1201 21:07:11.500566  521964 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1201 21:07:11.687123  521964 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1201 21:07:11.687287  521964 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/config.json ...
	I1201 21:07:11.687539  521964 cache.go:243] Successfully downloaded all kic artifacts
	I1201 21:07:11.687581  521964 start.go:360] acquireMachinesLock for functional-198694: {Name:mk75190be8638b73bbf357fb21be879be3d32136 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.687647  521964 start.go:364] duration metric: took 33.501µs to acquireMachinesLock for "functional-198694"
	I1201 21:07:11.687664  521964 start.go:96] Skipping create...Using existing machine configuration
	I1201 21:07:11.687669  521964 fix.go:54] fixHost starting: 
	I1201 21:07:11.687932  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:11.688204  521964 cache.go:107] acquiring lock: {Name:mkc02adc0b0ac86da96d7b1c6f73dd96db198bdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688271  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 21:07:11.688285  521964 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.581µs
	I1201 21:07:11.688306  521964 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 21:07:11.688318  521964 cache.go:107] acquiring lock: {Name:mk453dcc67fddeb9d4497c9de9efb4fa1295449c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688354  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 21:07:11.688367  521964 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 50.575µs
	I1201 21:07:11.688373  521964 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688390  521964 cache.go:107] acquiring lock: {Name:mk419ddf7fad28d46855543ef84396416e53becc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688439  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 21:07:11.688445  521964 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 57.213µs
	I1201 21:07:11.688452  521964 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688467  521964 cache.go:107] acquiring lock: {Name:mka55d294ab8a696f44b35601f713e0abbf24c5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688503  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 21:07:11.688513  521964 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 47.581µs
	I1201 21:07:11.688520  521964 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688529  521964 cache.go:107] acquiring lock: {Name:mk6dcec1fac0989e081c750d70caa7d5974f0e1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688566  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 21:07:11.688576  521964 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 47.712µs
	I1201 21:07:11.688582  521964 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688591  521964 cache.go:107] acquiring lock: {Name:mkf9aa1f704582196eb72cf90c132f43843b4423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688628  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1201 21:07:11.688637  521964 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 46.916µs
	I1201 21:07:11.688643  521964 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 21:07:11.688652  521964 cache.go:107] acquiring lock: {Name:mk60d129c4890b38a9b86e2bfa4a9fa21bc4f57a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688684  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 21:07:11.688693  521964 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 41.952µs
	I1201 21:07:11.688698  521964 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 21:07:11.688707  521964 cache.go:107] acquiring lock: {Name:mk345d9c863dd9143d9156cb17f795118869c197 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688742  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 21:07:11.688749  521964 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 43.527µs
	I1201 21:07:11.688755  521964 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 21:07:11.688763  521964 cache.go:87] Successfully saved all images to host disk.
	I1201 21:07:11.706210  521964 fix.go:112] recreateIfNeeded on functional-198694: state=Running err=<nil>
	W1201 21:07:11.706244  521964 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 21:07:11.709560  521964 out.go:252] * Updating the running docker "functional-198694" container ...
	I1201 21:07:11.709599  521964 machine.go:94] provisionDockerMachine start ...
	I1201 21:07:11.709692  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:11.727308  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:11.727671  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:11.727690  521964 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 21:07:11.874686  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:07:11.874711  521964 ubuntu.go:182] provisioning hostname "functional-198694"
	I1201 21:07:11.874786  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:11.892845  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:11.893165  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:11.893181  521964 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-198694 && echo "functional-198694" | sudo tee /etc/hostname
	I1201 21:07:12.052942  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:07:12.053034  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.072030  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:12.072356  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:12.072379  521964 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-198694' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-198694/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-198694' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 21:07:12.227676  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 21:07:12.227702  521964 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-482752/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-482752/.minikube}
	I1201 21:07:12.227769  521964 ubuntu.go:190] setting up certificates
	I1201 21:07:12.227787  521964 provision.go:84] configureAuth start
	I1201 21:07:12.227860  521964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:07:12.247353  521964 provision.go:143] copyHostCerts
	I1201 21:07:12.247405  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 21:07:12.247445  521964 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem, removing ...
	I1201 21:07:12.247463  521964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 21:07:12.247541  521964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem (1082 bytes)
	I1201 21:07:12.247639  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 21:07:12.247660  521964 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem, removing ...
	I1201 21:07:12.247665  521964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 21:07:12.247698  521964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem (1123 bytes)
	I1201 21:07:12.247755  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 21:07:12.247776  521964 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem, removing ...
	I1201 21:07:12.247785  521964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 21:07:12.247814  521964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem (1675 bytes)
	I1201 21:07:12.247874  521964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem org=jenkins.functional-198694 san=[127.0.0.1 192.168.49.2 functional-198694 localhost minikube]
	I1201 21:07:12.352949  521964 provision.go:177] copyRemoteCerts
	I1201 21:07:12.353031  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 21:07:12.353075  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.373178  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:12.479006  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1201 21:07:12.479125  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 21:07:12.496931  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1201 21:07:12.497043  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1201 21:07:12.515649  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1201 21:07:12.515717  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 21:07:12.533930  521964 provision.go:87] duration metric: took 306.12888ms to configureAuth
	I1201 21:07:12.533957  521964 ubuntu.go:206] setting minikube options for container-runtime
	I1201 21:07:12.534156  521964 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:07:12.534262  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.551972  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:12.552286  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:12.552304  521964 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 21:07:12.889959  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 21:07:12.889981  521964 machine.go:97] duration metric: took 1.180373916s to provisionDockerMachine
	I1201 21:07:12.889993  521964 start.go:293] postStartSetup for "functional-198694" (driver="docker")
	I1201 21:07:12.890006  521964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 21:07:12.890086  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 21:07:12.890139  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.908762  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.018597  521964 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 21:07:13.022335  521964 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1201 21:07:13.022369  521964 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1201 21:07:13.022376  521964 command_runner.go:130] > VERSION_ID="12"
	I1201 21:07:13.022381  521964 command_runner.go:130] > VERSION="12 (bookworm)"
	I1201 21:07:13.022386  521964 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1201 21:07:13.022390  521964 command_runner.go:130] > ID=debian
	I1201 21:07:13.022396  521964 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1201 21:07:13.022401  521964 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1201 21:07:13.022407  521964 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1201 21:07:13.022493  521964 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 21:07:13.022513  521964 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 21:07:13.022526  521964 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/addons for local assets ...
	I1201 21:07:13.022584  521964 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/files for local assets ...
	I1201 21:07:13.022685  521964 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> 4860022.pem in /etc/ssl/certs
	I1201 21:07:13.022696  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> /etc/ssl/certs/4860022.pem
	I1201 21:07:13.022772  521964 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts -> hosts in /etc/test/nested/copy/486002
	I1201 21:07:13.022784  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts -> /etc/test/nested/copy/486002/hosts
	I1201 21:07:13.022828  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/486002
	I1201 21:07:13.031305  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:07:13.050359  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts --> /etc/test/nested/copy/486002/hosts (40 bytes)
	I1201 21:07:13.069098  521964 start.go:296] duration metric: took 179.090292ms for postStartSetup
	I1201 21:07:13.069200  521964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 21:07:13.069250  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:13.087931  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.188150  521964 command_runner.go:130] > 18%
	I1201 21:07:13.188720  521964 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 21:07:13.193507  521964 command_runner.go:130] > 161G
	I1201 21:07:13.195867  521964 fix.go:56] duration metric: took 1.508190835s for fixHost
	I1201 21:07:13.195933  521964 start.go:83] releasing machines lock for "functional-198694", held for 1.508273853s
	I1201 21:07:13.196019  521964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:07:13.216611  521964 ssh_runner.go:195] Run: cat /version.json
	I1201 21:07:13.216667  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:13.216936  521964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 21:07:13.216990  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:13.238266  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.249198  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.342561  521964 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1201 21:07:13.342766  521964 ssh_runner.go:195] Run: systemctl --version
	I1201 21:07:13.434302  521964 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1201 21:07:13.434432  521964 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1201 21:07:13.434476  521964 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1201 21:07:13.434562  521964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 21:07:13.473148  521964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1201 21:07:13.477954  521964 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1201 21:07:13.478007  521964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 21:07:13.478081  521964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 21:07:13.486513  521964 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
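The step above disables any stray bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix so the runtime ignores them. A hedged, self-contained sketch of that technique (using a scratch directory in place of `/etc/cni/net.d`, and quoted glob patterns where the log shows the raw argv):

```shell
# Simulate /etc/cni/net.d with a scratch directory.
cnidir=$(mktemp -d)
touch "$cnidir/10-bridge.conflist" "$cnidir/87-podman.conflist" "$cnidir/loopback.conf"

# Rename bridge/podman configs that are not already disabled, as the
# `find ... -exec mv {} {}.mk_disabled` in the log does.
find "$cnidir" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$cnidir"
```

After this runs, only `loopback.conf` keeps its original name; the bridge and podman files gain the `.mk_disabled` suffix.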
	I1201 21:07:13.486536  521964 start.go:496] detecting cgroup driver to use...
	I1201 21:07:13.486599  521964 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1201 21:07:13.486671  521964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 21:07:13.502588  521964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 21:07:13.515851  521964 docker.go:218] disabling cri-docker service (if available) ...
	I1201 21:07:13.515935  521964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 21:07:13.531981  521964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 21:07:13.545612  521964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 21:07:13.660013  521964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 21:07:13.783921  521964 docker.go:234] disabling docker service ...
	I1201 21:07:13.783999  521964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 21:07:13.801145  521964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 21:07:13.814790  521964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 21:07:13.959260  521964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 21:07:14.082027  521964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 21:07:14.096899  521964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 21:07:14.110653  521964 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1201 21:07:14.112111  521964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 21:07:14.112234  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.121522  521964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 21:07:14.121606  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.132262  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.141626  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.151111  521964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 21:07:14.160033  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.169622  521964 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.178443  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
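The sequence of `sed` invocations above (crio.go:59/70) rewrites `/etc/crio/crio.conf.d/02-crio.conf` in place: it pins the pause image, switches the cgroup manager to the one detected on the host, and recreates `conmon_cgroup` to match. A minimal sketch of the same edits, run against a scratch copy of the config so it works without root (the starting values below are illustrative, not from this log):

```shell
conf=$(mktemp)
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Pin the pause image expected by this Kubernetes version.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
# Match the cgroup driver detected on the host ("cgroupfs" in this run).
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
# Drop any existing conmon_cgroup, then re-add it after cgroup_manager.
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

cat "$conf"
```

On a real node the target file is `/etc/crio/crio.conf.d/02-crio.conf` and each edit runs under `sudo`, followed by `systemctl daemon-reload` and `systemctl restart crio` as the log shows.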
	I1201 21:07:14.187976  521964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 21:07:14.194851  521964 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1201 21:07:14.196003  521964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 21:07:14.203835  521964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:07:14.312679  521964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 21:07:14.495171  521964 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 21:07:14.495301  521964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 21:07:14.499086  521964 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1201 21:07:14.499110  521964 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1201 21:07:14.499118  521964 command_runner.go:130] > Device: 0,72	Inode: 1746        Links: 1
	I1201 21:07:14.499125  521964 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1201 21:07:14.499150  521964 command_runner.go:130] > Access: 2025-12-01 21:07:14.424432171 +0000
	I1201 21:07:14.499176  521964 command_runner.go:130] > Modify: 2025-12-01 21:07:14.424432171 +0000
	I1201 21:07:14.499186  521964 command_runner.go:130] > Change: 2025-12-01 21:07:14.424432171 +0000
	I1201 21:07:14.499190  521964 command_runner.go:130] >  Birth: -
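The "Will wait 60s for socket path" step above succeeds on the first `stat` here, but the general technique is a poll-with-deadline loop. An illustrative sketch, using a scratch file in place of `/var/run/crio/crio.sock` so it runs anywhere (the one-second background `touch` stands in for CRI-O creating its socket):

```shell
sock=$(mktemp -u)             # a path that does not exist yet
( sleep 1; touch "$sock" ) &  # simulate the runtime creating its socket

deadline=$(( $(date +%s) + 60 ))
until stat "$sock" >/dev/null 2>&1; do
  if [ "$(date +%s)" -ge "$deadline" ]; then
    echo "timed out waiting for $sock"
    break
  fi
  sleep 0.2
done
echo "socket present: $sock"
wait
```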
	I1201 21:07:14.499219  521964 start.go:564] Will wait 60s for crictl version
	I1201 21:07:14.499275  521964 ssh_runner.go:195] Run: which crictl
	I1201 21:07:14.502678  521964 command_runner.go:130] > /usr/local/bin/crictl
	I1201 21:07:14.502996  521964 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 21:07:14.524882  521964 command_runner.go:130] > Version:  0.1.0
	I1201 21:07:14.524906  521964 command_runner.go:130] > RuntimeName:  cri-o
	I1201 21:07:14.524912  521964 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1201 21:07:14.524918  521964 command_runner.go:130] > RuntimeApiVersion:  v1
	I1201 21:07:14.526840  521964 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 21:07:14.526982  521964 ssh_runner.go:195] Run: crio --version
	I1201 21:07:14.553910  521964 command_runner.go:130] > crio version 1.34.2
	I1201 21:07:14.553933  521964 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1201 21:07:14.553939  521964 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1201 21:07:14.553944  521964 command_runner.go:130] >    GitTreeState:   dirty
	I1201 21:07:14.553950  521964 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1201 21:07:14.553971  521964 command_runner.go:130] >    GoVersion:      go1.24.6
	I1201 21:07:14.553976  521964 command_runner.go:130] >    Compiler:       gc
	I1201 21:07:14.553980  521964 command_runner.go:130] >    Platform:       linux/arm64
	I1201 21:07:14.553984  521964 command_runner.go:130] >    Linkmode:       static
	I1201 21:07:14.553987  521964 command_runner.go:130] >    BuildTags:
	I1201 21:07:14.553991  521964 command_runner.go:130] >      static
	I1201 21:07:14.553994  521964 command_runner.go:130] >      netgo
	I1201 21:07:14.553998  521964 command_runner.go:130] >      osusergo
	I1201 21:07:14.554001  521964 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1201 21:07:14.554009  521964 command_runner.go:130] >      seccomp
	I1201 21:07:14.554012  521964 command_runner.go:130] >      apparmor
	I1201 21:07:14.554016  521964 command_runner.go:130] >      selinux
	I1201 21:07:14.554020  521964 command_runner.go:130] >    LDFlags:          unknown
	I1201 21:07:14.554024  521964 command_runner.go:130] >    SeccompEnabled:   true
	I1201 21:07:14.554028  521964 command_runner.go:130] >    AppArmorEnabled:  false
	I1201 21:07:14.556106  521964 ssh_runner.go:195] Run: crio --version
	I1201 21:07:14.582720  521964 command_runner.go:130] > crio version 1.34.2
	I1201 21:07:14.582784  521964 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1201 21:07:14.582817  521964 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1201 21:07:14.582840  521964 command_runner.go:130] >    GitTreeState:   dirty
	I1201 21:07:14.582863  521964 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1201 21:07:14.582897  521964 command_runner.go:130] >    GoVersion:      go1.24.6
	I1201 21:07:14.582922  521964 command_runner.go:130] >    Compiler:       gc
	I1201 21:07:14.582947  521964 command_runner.go:130] >    Platform:       linux/arm64
	I1201 21:07:14.582984  521964 command_runner.go:130] >    Linkmode:       static
	I1201 21:07:14.583008  521964 command_runner.go:130] >    BuildTags:
	I1201 21:07:14.583029  521964 command_runner.go:130] >      static
	I1201 21:07:14.583063  521964 command_runner.go:130] >      netgo
	I1201 21:07:14.583085  521964 command_runner.go:130] >      osusergo
	I1201 21:07:14.583101  521964 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1201 21:07:14.583121  521964 command_runner.go:130] >      seccomp
	I1201 21:07:14.583170  521964 command_runner.go:130] >      apparmor
	I1201 21:07:14.583196  521964 command_runner.go:130] >      selinux
	I1201 21:07:14.583217  521964 command_runner.go:130] >    LDFlags:          unknown
	I1201 21:07:14.583262  521964 command_runner.go:130] >    SeccompEnabled:   true
	I1201 21:07:14.583287  521964 command_runner.go:130] >    AppArmorEnabled:  false
	I1201 21:07:14.589911  521964 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1201 21:07:14.592808  521964 cli_runner.go:164] Run: docker network inspect functional-198694 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 21:07:14.609405  521964 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1201 21:07:14.613461  521964 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1201 21:07:14.613638  521964 kubeadm.go:884] updating cluster {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 21:07:14.613753  521964 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:07:14.613807  521964 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 21:07:14.655721  521964 command_runner.go:130] > {
	I1201 21:07:14.655745  521964 command_runner.go:130] >   "images":  [
	I1201 21:07:14.655750  521964 command_runner.go:130] >     {
	I1201 21:07:14.655758  521964 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1201 21:07:14.655763  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.655768  521964 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1201 21:07:14.655771  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655775  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.655786  521964 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1201 21:07:14.655790  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655794  521964 command_runner.go:130] >       "size":  "29035622",
	I1201 21:07:14.655798  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.655803  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.655811  521964 command_runner.go:130] >     },
	I1201 21:07:14.655815  521964 command_runner.go:130] >     {
	I1201 21:07:14.655825  521964 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1201 21:07:14.655839  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.655846  521964 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1201 21:07:14.655854  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655858  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.655866  521964 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1201 21:07:14.655871  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655876  521964 command_runner.go:130] >       "size":  "74488375",
	I1201 21:07:14.655880  521964 command_runner.go:130] >       "username":  "nonroot",
	I1201 21:07:14.655884  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.655888  521964 command_runner.go:130] >     },
	I1201 21:07:14.655891  521964 command_runner.go:130] >     {
	I1201 21:07:14.655901  521964 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1201 21:07:14.655907  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.655912  521964 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1201 21:07:14.655918  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655927  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.655946  521964 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1201 21:07:14.655955  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655960  521964 command_runner.go:130] >       "size":  "60854229",
	I1201 21:07:14.655965  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.655974  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.655978  521964 command_runner.go:130] >       },
	I1201 21:07:14.655982  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.655986  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.655989  521964 command_runner.go:130] >     },
	I1201 21:07:14.655995  521964 command_runner.go:130] >     {
	I1201 21:07:14.656002  521964 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1201 21:07:14.656010  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656015  521964 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1201 21:07:14.656018  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656024  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656033  521964 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1201 21:07:14.656040  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656044  521964 command_runner.go:130] >       "size":  "84947242",
	I1201 21:07:14.656047  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656051  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.656061  521964 command_runner.go:130] >       },
	I1201 21:07:14.656065  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656068  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656071  521964 command_runner.go:130] >     },
	I1201 21:07:14.656075  521964 command_runner.go:130] >     {
	I1201 21:07:14.656084  521964 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1201 21:07:14.656090  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656096  521964 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1201 21:07:14.656100  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656106  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656115  521964 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1201 21:07:14.656121  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656132  521964 command_runner.go:130] >       "size":  "72167568",
	I1201 21:07:14.656139  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656143  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.656146  521964 command_runner.go:130] >       },
	I1201 21:07:14.656150  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656154  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656160  521964 command_runner.go:130] >     },
	I1201 21:07:14.656163  521964 command_runner.go:130] >     {
	I1201 21:07:14.656170  521964 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1201 21:07:14.656176  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656182  521964 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1201 21:07:14.656185  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656209  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656218  521964 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1201 21:07:14.656223  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656228  521964 command_runner.go:130] >       "size":  "74105124",
	I1201 21:07:14.656231  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656236  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656241  521964 command_runner.go:130] >     },
	I1201 21:07:14.656245  521964 command_runner.go:130] >     {
	I1201 21:07:14.656251  521964 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1201 21:07:14.656257  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656262  521964 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1201 21:07:14.656268  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656272  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656279  521964 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1201 21:07:14.656285  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656289  521964 command_runner.go:130] >       "size":  "49819792",
	I1201 21:07:14.656293  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656303  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.656307  521964 command_runner.go:130] >       },
	I1201 21:07:14.656311  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656316  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656323  521964 command_runner.go:130] >     },
	I1201 21:07:14.656330  521964 command_runner.go:130] >     {
	I1201 21:07:14.656337  521964 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1201 21:07:14.656341  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656345  521964 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1201 21:07:14.656350  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656355  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656365  521964 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1201 21:07:14.656368  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656372  521964 command_runner.go:130] >       "size":  "517328",
	I1201 21:07:14.656378  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656383  521964 command_runner.go:130] >         "value":  "65535"
	I1201 21:07:14.656388  521964 command_runner.go:130] >       },
	I1201 21:07:14.656392  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656395  521964 command_runner.go:130] >       "pinned":  true
	I1201 21:07:14.656399  521964 command_runner.go:130] >     }
	I1201 21:07:14.656404  521964 command_runner.go:130] >   ]
	I1201 21:07:14.656408  521964 command_runner.go:130] > }
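The conclusion drawn right after this JSON ("all images are preloaded") rests on flattening the `repoTags` from `sudo crictl images --output json` and comparing them with the image set expected for the Kubernetes version. A hypothetical sketch of that flattening step; the two-image JSON below is a trimmed stand-in for the real output above:

```shell
json='{"images":[{"repoTags":["registry.k8s.io/pause:3.10.1"]},{"repoTags":["registry.k8s.io/etcd:3.6.5-0"]}]}'
# Print one repoTag per line, ready to diff against the expected list.
printf '%s' "$json" | python3 -c '
import json, sys
data = json.load(sys.stdin)
for img in data["images"]:
    for tag in img["repoTags"]:
        print(tag)
'
```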
	I1201 21:07:14.656549  521964 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 21:07:14.656561  521964 cache_images.go:86] Images are preloaded, skipping loading
	I1201 21:07:14.656568  521964 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1201 21:07:14.656668  521964 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-198694 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 21:07:14.656752  521964 ssh_runner.go:195] Run: crio config
	I1201 21:07:14.734869  521964 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1201 21:07:14.734915  521964 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1201 21:07:14.734928  521964 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1201 21:07:14.734945  521964 command_runner.go:130] > #
	I1201 21:07:14.734957  521964 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1201 21:07:14.734978  521964 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1201 21:07:14.734989  521964 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1201 21:07:14.735001  521964 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1201 21:07:14.735009  521964 command_runner.go:130] > # reload'.
	I1201 21:07:14.735017  521964 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1201 21:07:14.735028  521964 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1201 21:07:14.735038  521964 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1201 21:07:14.735051  521964 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1201 21:07:14.735059  521964 command_runner.go:130] > [crio]
	I1201 21:07:14.735069  521964 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1201 21:07:14.735078  521964 command_runner.go:130] > # containers images, in this directory.
	I1201 21:07:14.735108  521964 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1201 21:07:14.735125  521964 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1201 21:07:14.735149  521964 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1201 21:07:14.735158  521964 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1201 21:07:14.735167  521964 command_runner.go:130] > # imagestore = ""
	I1201 21:07:14.735180  521964 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1201 21:07:14.735200  521964 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1201 21:07:14.735401  521964 command_runner.go:130] > # storage_driver = "overlay"
	I1201 21:07:14.735416  521964 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1201 21:07:14.735422  521964 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1201 21:07:14.735427  521964 command_runner.go:130] > # storage_option = [
	I1201 21:07:14.735430  521964 command_runner.go:130] > # ]
	I1201 21:07:14.735440  521964 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1201 21:07:14.735447  521964 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1201 21:07:14.735451  521964 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1201 21:07:14.735457  521964 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1201 21:07:14.735464  521964 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1201 21:07:14.735475  521964 command_runner.go:130] > # always happen on a node reboot
	I1201 21:07:14.735773  521964 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1201 21:07:14.735799  521964 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1201 21:07:14.735807  521964 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1201 21:07:14.735813  521964 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1201 21:07:14.735817  521964 command_runner.go:130] > # version_file_persist = ""
	I1201 21:07:14.735825  521964 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1201 21:07:14.735839  521964 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1201 21:07:14.735844  521964 command_runner.go:130] > # internal_wipe = true
	I1201 21:07:14.735852  521964 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1201 21:07:14.735858  521964 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1201 21:07:14.735861  521964 command_runner.go:130] > # internal_repair = true
	I1201 21:07:14.735867  521964 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1201 21:07:14.735873  521964 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1201 21:07:14.735882  521964 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1201 21:07:14.735891  521964 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1201 21:07:14.735901  521964 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1201 21:07:14.735904  521964 command_runner.go:130] > [crio.api]
	I1201 21:07:14.735909  521964 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1201 21:07:14.735916  521964 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1201 21:07:14.735921  521964 command_runner.go:130] > # IP address on which the stream server will listen.
	I1201 21:07:14.735925  521964 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1201 21:07:14.735932  521964 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1201 21:07:14.735946  521964 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1201 21:07:14.735950  521964 command_runner.go:130] > # stream_port = "0"
	I1201 21:07:14.735958  521964 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1201 21:07:14.735962  521964 command_runner.go:130] > # stream_enable_tls = false
	I1201 21:07:14.735968  521964 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1201 21:07:14.735972  521964 command_runner.go:130] > # stream_idle_timeout = ""
	I1201 21:07:14.735981  521964 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1201 21:07:14.735991  521964 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1201 21:07:14.735995  521964 command_runner.go:130] > # stream_tls_cert = ""
	I1201 21:07:14.736001  521964 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1201 21:07:14.736006  521964 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1201 21:07:14.736013  521964 command_runner.go:130] > # stream_tls_key = ""
	I1201 21:07:14.736023  521964 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1201 21:07:14.736030  521964 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1201 21:07:14.736037  521964 command_runner.go:130] > # automatically pick up the changes.
	I1201 21:07:14.736045  521964 command_runner.go:130] > # stream_tls_ca = ""
	I1201 21:07:14.736072  521964 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1201 21:07:14.736077  521964 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1201 21:07:14.736085  521964 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1201 21:07:14.736092  521964 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
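The [crio.api] defaults echoed above can be overridden with a small TOML drop-in. The fragment below is a hypothetical sketch (the drop-in filename, port, and certificate paths are illustrative, not values from this run):

```toml
# /etc/crio/crio.conf.d/10-stream-tls.conf (hypothetical drop-in path)
[crio.api]
# Serve the stream server over TLS on a fixed port instead of a random one.
stream_address = "127.0.0.1"
stream_port = "10010"
stream_enable_tls = true
# Per the comments above, CRI-O picks up changes to these files automatically.
stream_tls_cert = "/etc/crio/tls/stream.crt"
stream_tls_key = "/etc/crio/tls/stream.key"
```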
	I1201 21:07:14.736099  521964 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1201 21:07:14.736105  521964 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1201 21:07:14.736108  521964 command_runner.go:130] > [crio.runtime]
	I1201 21:07:14.736114  521964 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1201 21:07:14.736119  521964 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1201 21:07:14.736127  521964 command_runner.go:130] > # "nofile=1024:2048"
	I1201 21:07:14.736134  521964 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1201 21:07:14.736138  521964 command_runner.go:130] > # default_ulimits = [
	I1201 21:07:14.736141  521964 command_runner.go:130] > # ]
	I1201 21:07:14.736146  521964 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1201 21:07:14.736150  521964 command_runner.go:130] > # no_pivot = false
	I1201 21:07:14.736162  521964 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1201 21:07:14.736168  521964 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1201 21:07:14.736196  521964 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1201 21:07:14.736202  521964 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1201 21:07:14.736210  521964 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1201 21:07:14.736220  521964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1201 21:07:14.736223  521964 command_runner.go:130] > # conmon = ""
	I1201 21:07:14.736228  521964 command_runner.go:130] > # Cgroup setting for conmon
	I1201 21:07:14.736235  521964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1201 21:07:14.736239  521964 command_runner.go:130] > conmon_cgroup = "pod"
	I1201 21:07:14.736257  521964 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1201 21:07:14.736262  521964 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1201 21:07:14.736269  521964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1201 21:07:14.736273  521964 command_runner.go:130] > # conmon_env = [
	I1201 21:07:14.736276  521964 command_runner.go:130] > # ]
	I1201 21:07:14.736281  521964 command_runner.go:130] > # Additional environment variables to set for all the
	I1201 21:07:14.736286  521964 command_runner.go:130] > # containers. These are overridden if set in the
	I1201 21:07:14.736295  521964 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1201 21:07:14.736302  521964 command_runner.go:130] > # default_env = [
	I1201 21:07:14.736308  521964 command_runner.go:130] > # ]
	I1201 21:07:14.736314  521964 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1201 21:07:14.736322  521964 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1201 21:07:14.736328  521964 command_runner.go:130] > # selinux = false
	I1201 21:07:14.736356  521964 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1201 21:07:14.736370  521964 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1201 21:07:14.736375  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736379  521964 command_runner.go:130] > # seccomp_profile = ""
	I1201 21:07:14.736388  521964 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1201 21:07:14.736393  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736397  521964 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1201 21:07:14.736406  521964 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1201 21:07:14.736413  521964 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1201 21:07:14.736419  521964 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1201 21:07:14.736425  521964 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1201 21:07:14.736431  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736439  521964 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1201 21:07:14.736445  521964 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1201 21:07:14.736449  521964 command_runner.go:130] > # the cgroup blockio controller.
	I1201 21:07:14.736452  521964 command_runner.go:130] > # blockio_config_file = ""
	I1201 21:07:14.736459  521964 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1201 21:07:14.736463  521964 command_runner.go:130] > # blockio parameters.
	I1201 21:07:14.736467  521964 command_runner.go:130] > # blockio_reload = false
	I1201 21:07:14.736474  521964 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1201 21:07:14.736477  521964 command_runner.go:130] > # irqbalance daemon.
	I1201 21:07:14.736483  521964 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1201 21:07:14.736489  521964 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1201 21:07:14.736496  521964 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1201 21:07:14.736508  521964 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1201 21:07:14.736514  521964 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1201 21:07:14.736523  521964 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1201 21:07:14.736532  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736536  521964 command_runner.go:130] > # rdt_config_file = ""
	I1201 21:07:14.736545  521964 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1201 21:07:14.736550  521964 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1201 21:07:14.736555  521964 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1201 21:07:14.736560  521964 command_runner.go:130] > # separate_pull_cgroup = ""
	I1201 21:07:14.736569  521964 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1201 21:07:14.736576  521964 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1201 21:07:14.736580  521964 command_runner.go:130] > # will be added.
	I1201 21:07:14.736585  521964 command_runner.go:130] > # default_capabilities = [
	I1201 21:07:14.737078  521964 command_runner.go:130] > # 	"CHOWN",
	I1201 21:07:14.737092  521964 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1201 21:07:14.737096  521964 command_runner.go:130] > # 	"FSETID",
	I1201 21:07:14.737099  521964 command_runner.go:130] > # 	"FOWNER",
	I1201 21:07:14.737102  521964 command_runner.go:130] > # 	"SETGID",
	I1201 21:07:14.737106  521964 command_runner.go:130] > # 	"SETUID",
	I1201 21:07:14.737130  521964 command_runner.go:130] > # 	"SETPCAP",
	I1201 21:07:14.737134  521964 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1201 21:07:14.737138  521964 command_runner.go:130] > # 	"KILL",
	I1201 21:07:14.737144  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737153  521964 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1201 21:07:14.737160  521964 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1201 21:07:14.737165  521964 command_runner.go:130] > # add_inheritable_capabilities = false
	I1201 21:07:14.737171  521964 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1201 21:07:14.737189  521964 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1201 21:07:14.737193  521964 command_runner.go:130] > default_sysctls = [
	I1201 21:07:14.737198  521964 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1201 21:07:14.737200  521964 command_runner.go:130] > ]
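Of the [crio.runtime] lines logged so far, only the uncommented ones deviate from the shipped defaults. Gathered into a single fragment, the effective overrides for this run are:

```toml
[crio.runtime]
# Uncommented values from the config dump above; everything else stays at the
# commented-out defaults.
conmon_cgroup = "pod"
cgroup_manager = "cgroupfs"
default_sysctls = [
	"net.ipv4.ip_unprivileged_port_start=0",
]
```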
	I1201 21:07:14.737205  521964 command_runner.go:130] > # List of devices on the host that a
	I1201 21:07:14.737212  521964 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1201 21:07:14.737215  521964 command_runner.go:130] > # allowed_devices = [
	I1201 21:07:14.737219  521964 command_runner.go:130] > # 	"/dev/fuse",
	I1201 21:07:14.737222  521964 command_runner.go:130] > # 	"/dev/net/tun",
	I1201 21:07:14.737225  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737230  521964 command_runner.go:130] > # List of additional devices, specified as
	I1201 21:07:14.737237  521964 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1201 21:07:14.737243  521964 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1201 21:07:14.737249  521964 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1201 21:07:14.737253  521964 command_runner.go:130] > # additional_devices = [
	I1201 21:07:14.737257  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737266  521964 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1201 21:07:14.737271  521964 command_runner.go:130] > # cdi_spec_dirs = [
	I1201 21:07:14.737274  521964 command_runner.go:130] > # 	"/etc/cdi",
	I1201 21:07:14.737277  521964 command_runner.go:130] > # 	"/var/run/cdi",
	I1201 21:07:14.737280  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737286  521964 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1201 21:07:14.737293  521964 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1201 21:07:14.737297  521964 command_runner.go:130] > # Defaults to false.
	I1201 21:07:14.737311  521964 command_runner.go:130] > # device_ownership_from_security_context = false
	I1201 21:07:14.737318  521964 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1201 21:07:14.737324  521964 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1201 21:07:14.737327  521964 command_runner.go:130] > # hooks_dir = [
	I1201 21:07:14.737335  521964 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1201 21:07:14.737338  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737344  521964 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1201 21:07:14.737352  521964 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1201 21:07:14.737357  521964 command_runner.go:130] > # its default mounts from the following two files:
	I1201 21:07:14.737360  521964 command_runner.go:130] > #
	I1201 21:07:14.737366  521964 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1201 21:07:14.737372  521964 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1201 21:07:14.737378  521964 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1201 21:07:14.737380  521964 command_runner.go:130] > #
	I1201 21:07:14.737386  521964 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1201 21:07:14.737393  521964 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1201 21:07:14.737399  521964 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1201 21:07:14.737407  521964 command_runner.go:130] > #      only add mounts it finds in this file.
	I1201 21:07:14.737410  521964 command_runner.go:130] > #
	I1201 21:07:14.737414  521964 command_runner.go:130] > # default_mounts_file = ""
	I1201 21:07:14.737422  521964 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1201 21:07:14.737429  521964 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1201 21:07:14.737433  521964 command_runner.go:130] > # pids_limit = -1
	I1201 21:07:14.737440  521964 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1201 21:07:14.737446  521964 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1201 21:07:14.737452  521964 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1201 21:07:14.737460  521964 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1201 21:07:14.737464  521964 command_runner.go:130] > # log_size_max = -1
	I1201 21:07:14.737472  521964 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1201 21:07:14.737476  521964 command_runner.go:130] > # log_to_journald = false
	I1201 21:07:14.737487  521964 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1201 21:07:14.737492  521964 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1201 21:07:14.737497  521964 command_runner.go:130] > # Path to directory for container attach sockets.
	I1201 21:07:14.737502  521964 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1201 21:07:14.737511  521964 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1201 21:07:14.737516  521964 command_runner.go:130] > # bind_mount_prefix = ""
	I1201 21:07:14.737521  521964 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1201 21:07:14.737528  521964 command_runner.go:130] > # read_only = false
	I1201 21:07:14.737534  521964 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1201 21:07:14.737541  521964 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1201 21:07:14.737545  521964 command_runner.go:130] > # live configuration reload.
	I1201 21:07:14.737549  521964 command_runner.go:130] > # log_level = "info"
	I1201 21:07:14.737557  521964 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1201 21:07:14.737563  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.737567  521964 command_runner.go:130] > # log_filter = ""
	I1201 21:07:14.737573  521964 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1201 21:07:14.737583  521964 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1201 21:07:14.737588  521964 command_runner.go:130] > # separated by comma.
	I1201 21:07:14.737596  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737599  521964 command_runner.go:130] > # uid_mappings = ""
	I1201 21:07:14.737606  521964 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1201 21:07:14.737612  521964 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1201 21:07:14.737616  521964 command_runner.go:130] > # separated by comma.
	I1201 21:07:14.737624  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737627  521964 command_runner.go:130] > # gid_mappings = ""
	I1201 21:07:14.737634  521964 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1201 21:07:14.737640  521964 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1201 21:07:14.737646  521964 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1201 21:07:14.737660  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737665  521964 command_runner.go:130] > # minimum_mappable_uid = -1
	I1201 21:07:14.737674  521964 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1201 21:07:14.737681  521964 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1201 21:07:14.737686  521964 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1201 21:07:14.737694  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737937  521964 command_runner.go:130] > # minimum_mappable_gid = -1
	I1201 21:07:14.737957  521964 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1201 21:07:14.737967  521964 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1201 21:07:14.737974  521964 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1201 21:07:14.737980  521964 command_runner.go:130] > # ctr_stop_timeout = 30
	I1201 21:07:14.737998  521964 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1201 21:07:14.738018  521964 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1201 21:07:14.738028  521964 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1201 21:07:14.738033  521964 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1201 21:07:14.738042  521964 command_runner.go:130] > # drop_infra_ctr = true
	I1201 21:07:14.738048  521964 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1201 21:07:14.738058  521964 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1201 21:07:14.738073  521964 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1201 21:07:14.738082  521964 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1201 21:07:14.738090  521964 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1201 21:07:14.738099  521964 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1201 21:07:14.738106  521964 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1201 21:07:14.738116  521964 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1201 21:07:14.738120  521964 command_runner.go:130] > # shared_cpuset = ""
	I1201 21:07:14.738130  521964 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1201 21:07:14.738139  521964 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1201 21:07:14.738154  521964 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1201 21:07:14.738162  521964 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1201 21:07:14.738167  521964 command_runner.go:130] > # pinns_path = ""
	I1201 21:07:14.738173  521964 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1201 21:07:14.738182  521964 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1201 21:07:14.738191  521964 command_runner.go:130] > # enable_criu_support = true
	I1201 21:07:14.738197  521964 command_runner.go:130] > # Enable/disable the generation of the container,
	I1201 21:07:14.738206  521964 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1201 21:07:14.738221  521964 command_runner.go:130] > # enable_pod_events = false
	I1201 21:07:14.738232  521964 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1201 21:07:14.738238  521964 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1201 21:07:14.738242  521964 command_runner.go:130] > # default_runtime = "crun"
	I1201 21:07:14.738251  521964 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1201 21:07:14.738259  521964 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1201 21:07:14.738269  521964 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1201 21:07:14.738278  521964 command_runner.go:130] > # creation as a file is not desired either.
	I1201 21:07:14.738287  521964 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1201 21:07:14.738304  521964 command_runner.go:130] > # the hostname is being managed dynamically.
	I1201 21:07:14.738322  521964 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1201 21:07:14.738329  521964 command_runner.go:130] > # ]
	I1201 21:07:14.738336  521964 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1201 21:07:14.738347  521964 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1201 21:07:14.738353  521964 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1201 21:07:14.738358  521964 command_runner.go:130] > # Each entry in the table should follow the format:
	I1201 21:07:14.738365  521964 command_runner.go:130] > #
	I1201 21:07:14.738381  521964 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1201 21:07:14.738387  521964 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1201 21:07:14.738394  521964 command_runner.go:130] > # runtime_type = "oci"
	I1201 21:07:14.738400  521964 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1201 21:07:14.738408  521964 command_runner.go:130] > # inherit_default_runtime = false
	I1201 21:07:14.738414  521964 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1201 21:07:14.738421  521964 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1201 21:07:14.738426  521964 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1201 21:07:14.738434  521964 command_runner.go:130] > # monitor_env = []
	I1201 21:07:14.738439  521964 command_runner.go:130] > # privileged_without_host_devices = false
	I1201 21:07:14.738449  521964 command_runner.go:130] > # allowed_annotations = []
	I1201 21:07:14.738459  521964 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1201 21:07:14.738463  521964 command_runner.go:130] > # no_sync_log = false
	I1201 21:07:14.738469  521964 command_runner.go:130] > # default_annotations = {}
	I1201 21:07:14.738473  521964 command_runner.go:130] > # stream_websockets = false
	I1201 21:07:14.738481  521964 command_runner.go:130] > # seccomp_profile = ""
	I1201 21:07:14.738515  521964 command_runner.go:130] > # Where:
	I1201 21:07:14.738533  521964 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1201 21:07:14.738539  521964 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1201 21:07:14.738546  521964 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1201 21:07:14.738556  521964 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1201 21:07:14.738560  521964 command_runner.go:130] > #   in $PATH.
	I1201 21:07:14.738572  521964 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1201 21:07:14.738581  521964 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1201 21:07:14.738587  521964 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1201 21:07:14.738601  521964 command_runner.go:130] > #   state.
	I1201 21:07:14.738612  521964 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1201 21:07:14.738623  521964 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1201 21:07:14.738629  521964 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1201 21:07:14.738641  521964 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1201 21:07:14.738648  521964 command_runner.go:130] > #   the values from the default runtime on load time.
	I1201 21:07:14.738658  521964 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1201 21:07:14.738675  521964 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1201 21:07:14.738686  521964 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1201 21:07:14.738697  521964 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1201 21:07:14.738706  521964 command_runner.go:130] > #   The currently recognized values are:
	I1201 21:07:14.738713  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1201 21:07:14.738722  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1201 21:07:14.738731  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1201 21:07:14.738737  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1201 21:07:14.738751  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1201 21:07:14.738762  521964 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1201 21:07:14.738774  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1201 21:07:14.738785  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1201 21:07:14.738795  521964 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1201 21:07:14.738801  521964 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1201 21:07:14.738814  521964 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1201 21:07:14.738830  521964 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1201 21:07:14.738841  521964 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1201 21:07:14.738847  521964 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1201 21:07:14.738857  521964 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1201 21:07:14.738871  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1201 21:07:14.738878  521964 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1201 21:07:14.738885  521964 command_runner.go:130] > #   deprecated option "conmon".
	I1201 21:07:14.738904  521964 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1201 21:07:14.738913  521964 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1201 21:07:14.738921  521964 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1201 21:07:14.738930  521964 command_runner.go:130] > #   should be moved to the container's cgroup
	I1201 21:07:14.738937  521964 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1201 21:07:14.738949  521964 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1201 21:07:14.738961  521964 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1201 21:07:14.738974  521964 command_runner.go:130] > #   conmon-rs by using:
	I1201 21:07:14.738982  521964 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1201 21:07:14.738996  521964 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1201 21:07:14.739008  521964 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1201 21:07:14.739024  521964 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1201 21:07:14.739033  521964 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1201 21:07:14.739040  521964 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1201 21:07:14.739057  521964 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1201 21:07:14.739067  521964 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1201 21:07:14.739077  521964 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1201 21:07:14.739089  521964 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1201 21:07:14.739097  521964 command_runner.go:130] > #   when a machine crash happens.
	I1201 21:07:14.739105  521964 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1201 21:07:14.739117  521964 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1201 21:07:14.739152  521964 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1201 21:07:14.739158  521964 command_runner.go:130] > #   seccomp profile for the runtime.
	I1201 21:07:14.739165  521964 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1201 21:07:14.739172  521964 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
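Taken together, the runtime handler fields documented above form a `[crio.runtime.runtimes.<name>]` table. A minimal sketch (the handler name "kata" and all paths here are hypothetical, not taken from this log):

```toml
# Hypothetical runtime handler combining the fields documented above.
[crio.runtime.runtimes.kata]
runtime_path = "/usr/bin/kata-runtime"                # path of the OCI runtime binary (hypothetical)
runtime_type = "vm"                                   # one of "oci" (default) or "vm"
runtime_config_path = "/etc/kata/configuration.toml"  # only valid with the "vm" runtime_type
privileged_without_host_devices = true                # keep host devices out of privileged containers
allowed_annotations = [
	"io.kubernetes.cri-o.Devices",                # this handler may process the Devices annotation
]
container_min_memory = "16MiB"                        # overrides the global default of "12 MiB"
```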
	I1201 21:07:14.739175  521964 command_runner.go:130] > #
	I1201 21:07:14.739179  521964 command_runner.go:130] > # Using the seccomp notifier feature:
	I1201 21:07:14.739182  521964 command_runner.go:130] > #
	I1201 21:07:14.739188  521964 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1201 21:07:14.739195  521964 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1201 21:07:14.739204  521964 command_runner.go:130] > #
	I1201 21:07:14.739211  521964 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1201 21:07:14.739217  521964 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1201 21:07:14.739220  521964 command_runner.go:130] > #
	I1201 21:07:14.739225  521964 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1201 21:07:14.739228  521964 command_runner.go:130] > # feature.
	I1201 21:07:14.739231  521964 command_runner.go:130] > #
	I1201 21:07:14.739237  521964 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1201 21:07:14.739247  521964 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1201 21:07:14.739257  521964 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1201 21:07:14.739263  521964 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1201 21:07:14.739270  521964 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1201 21:07:14.739281  521964 command_runner.go:130] > #
	I1201 21:07:14.739288  521964 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1201 21:07:14.739293  521964 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1201 21:07:14.739296  521964 command_runner.go:130] > #
	I1201 21:07:14.739302  521964 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1201 21:07:14.739308  521964 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1201 21:07:14.739310  521964 command_runner.go:130] > #
	I1201 21:07:14.739316  521964 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1201 21:07:14.739322  521964 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1201 21:07:14.739325  521964 command_runner.go:130] > # limitation.
	I1201 21:07:14.739329  521964 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1201 21:07:14.739334  521964 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1201 21:07:14.739337  521964 command_runner.go:130] > runtime_type = ""
	I1201 21:07:14.739341  521964 command_runner.go:130] > runtime_root = "/run/crun"
	I1201 21:07:14.739345  521964 command_runner.go:130] > inherit_default_runtime = false
	I1201 21:07:14.739356  521964 command_runner.go:130] > runtime_config_path = ""
	I1201 21:07:14.739360  521964 command_runner.go:130] > container_min_memory = ""
	I1201 21:07:14.739365  521964 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1201 21:07:14.739369  521964 command_runner.go:130] > monitor_cgroup = "pod"
	I1201 21:07:14.739373  521964 command_runner.go:130] > monitor_exec_cgroup = ""
	I1201 21:07:14.739380  521964 command_runner.go:130] > allowed_annotations = [
	I1201 21:07:14.739384  521964 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1201 21:07:14.739391  521964 command_runner.go:130] > ]
	I1201 21:07:14.739396  521964 command_runner.go:130] > privileged_without_host_devices = false
	I1201 21:07:14.739400  521964 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1201 21:07:14.739409  521964 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1201 21:07:14.739413  521964 command_runner.go:130] > runtime_type = ""
	I1201 21:07:14.739420  521964 command_runner.go:130] > runtime_root = "/run/runc"
	I1201 21:07:14.739434  521964 command_runner.go:130] > inherit_default_runtime = false
	I1201 21:07:14.739442  521964 command_runner.go:130] > runtime_config_path = ""
	I1201 21:07:14.739450  521964 command_runner.go:130] > container_min_memory = ""
	I1201 21:07:14.739455  521964 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1201 21:07:14.739459  521964 command_runner.go:130] > monitor_cgroup = "pod"
	I1201 21:07:14.739465  521964 command_runner.go:130] > monitor_exec_cgroup = ""
	I1201 21:07:14.739470  521964 command_runner.go:130] > privileged_without_host_devices = false
	I1201 21:07:14.739481  521964 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1201 21:07:14.739490  521964 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1201 21:07:14.739507  521964 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1201 21:07:14.739519  521964 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1201 21:07:14.739534  521964 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1201 21:07:14.739546  521964 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1201 21:07:14.739559  521964 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1201 21:07:14.739569  521964 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1201 21:07:14.739589  521964 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1201 21:07:14.739601  521964 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1201 21:07:14.739616  521964 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1201 21:07:14.739627  521964 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1201 21:07:14.739635  521964 command_runner.go:130] > # Example:
	I1201 21:07:14.739639  521964 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1201 21:07:14.739652  521964 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1201 21:07:14.739663  521964 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1201 21:07:14.739669  521964 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1201 21:07:14.739672  521964 command_runner.go:130] > # cpuset = "0-1"
	I1201 21:07:14.739681  521964 command_runner.go:130] > # cpushares = "5"
	I1201 21:07:14.739685  521964 command_runner.go:130] > # cpuquota = "1000"
	I1201 21:07:14.739694  521964 command_runner.go:130] > # cpuperiod = "100000"
	I1201 21:07:14.739698  521964 command_runner.go:130] > # cpulimit = "35"
	I1201 21:07:14.739705  521964 command_runner.go:130] > # Where:
	I1201 21:07:14.739709  521964 command_runner.go:130] > # The workload name is workload-type.
	I1201 21:07:14.739716  521964 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1201 21:07:14.739728  521964 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1201 21:07:14.739739  521964 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1201 21:07:14.739752  521964 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1201 21:07:14.739762  521964 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1201 21:07:14.739768  521964 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1201 21:07:14.739778  521964 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1201 21:07:14.739786  521964 command_runner.go:130] > # Default value is set to true
	I1201 21:07:14.739791  521964 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1201 21:07:14.739803  521964 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1201 21:07:14.739813  521964 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1201 21:07:14.739818  521964 command_runner.go:130] > # Default value is set to 'false'
	I1201 21:07:14.739822  521964 command_runner.go:130] > # disable_hostport_mapping = false
	I1201 21:07:14.739830  521964 command_runner.go:130] > # timezone: To set the timezone for a container in CRI-O.
	I1201 21:07:14.739839  521964 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1201 21:07:14.739846  521964 command_runner.go:130] > # timezone = ""
	I1201 21:07:14.739853  521964 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1201 21:07:14.739859  521964 command_runner.go:130] > #
	I1201 21:07:14.739866  521964 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1201 21:07:14.739884  521964 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1201 21:07:14.739892  521964 command_runner.go:130] > [crio.image]
	I1201 21:07:14.739898  521964 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1201 21:07:14.739903  521964 command_runner.go:130] > # default_transport = "docker://"
	I1201 21:07:14.739913  521964 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1201 21:07:14.739919  521964 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1201 21:07:14.739926  521964 command_runner.go:130] > # global_auth_file = ""
	I1201 21:07:14.739931  521964 command_runner.go:130] > # The image used to instantiate infra containers.
	I1201 21:07:14.739940  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.739952  521964 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1201 21:07:14.739964  521964 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1201 21:07:14.739973  521964 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1201 21:07:14.739979  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.739986  521964 command_runner.go:130] > # pause_image_auth_file = ""
	I1201 21:07:14.739993  521964 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1201 21:07:14.740002  521964 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1201 21:07:14.740009  521964 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1201 21:07:14.740029  521964 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1201 21:07:14.740037  521964 command_runner.go:130] > # pause_command = "/pause"
	I1201 21:07:14.740044  521964 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1201 21:07:14.740053  521964 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1201 21:07:14.740060  521964 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1201 21:07:14.740070  521964 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1201 21:07:14.740076  521964 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1201 21:07:14.740086  521964 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1201 21:07:14.740091  521964 command_runner.go:130] > # pinned_images = [
	I1201 21:07:14.740093  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740110  521964 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1201 21:07:14.740121  521964 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1201 21:07:14.740133  521964 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1201 21:07:14.740143  521964 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1201 21:07:14.740153  521964 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1201 21:07:14.740158  521964 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1201 21:07:14.740167  521964 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1201 21:07:14.740181  521964 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1201 21:07:14.740204  521964 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1201 21:07:14.740215  521964 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1201 21:07:14.740226  521964 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1201 21:07:14.740236  521964 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1201 21:07:14.740243  521964 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1201 21:07:14.740259  521964 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1201 21:07:14.740263  521964 command_runner.go:130] > # changing them here.
	I1201 21:07:14.740273  521964 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1201 21:07:14.740278  521964 command_runner.go:130] > # insecure_registries = [
	I1201 21:07:14.740285  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740293  521964 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1201 21:07:14.740302  521964 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1201 21:07:14.740306  521964 command_runner.go:130] > # image_volumes = "mkdir"
	I1201 21:07:14.740316  521964 command_runner.go:130] > # Temporary directory to use for storing big files
	I1201 21:07:14.740321  521964 command_runner.go:130] > # big_files_temporary_dir = ""
	I1201 21:07:14.740340  521964 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1201 21:07:14.740349  521964 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1201 21:07:14.740358  521964 command_runner.go:130] > # auto_reload_registries = false
	I1201 21:07:14.740364  521964 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1201 21:07:14.740376  521964 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1201 21:07:14.740387  521964 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1201 21:07:14.740391  521964 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1201 21:07:14.740399  521964 command_runner.go:130] > # The mode of short name resolution.
	I1201 21:07:14.740415  521964 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1201 21:07:14.740423  521964 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1201 21:07:14.740428  521964 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1201 21:07:14.740436  521964 command_runner.go:130] > # short_name_mode = "enforcing"
	I1201 21:07:14.740443  521964 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1201 21:07:14.740453  521964 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1201 21:07:14.740462  521964 command_runner.go:130] > # oci_artifact_mount_support = true
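The individual [crio.image] options above can be combined into one override fragment. A minimal sketch, assuming defaults are being overridden (the pinned_images glob and the 60s timeout are illustrative choices, not values from this log; the pause_image and signature_policy values mirror the ones shown above):

```toml
# Hypothetical [crio.image] overrides for the options documented above.
[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"
pinned_images = [
	"registry.k8s.io/pause*",      # glob match: trailing * keeps every pause tag
]
signature_policy = "/etc/crio/policy.json"
pull_progress_timeout = "60s"          # cancel stalled pulls; progress interval becomes 6s
```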
	I1201 21:07:14.740469  521964 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1201 21:07:14.740484  521964 command_runner.go:130] > # CNI plugins.
	I1201 21:07:14.740492  521964 command_runner.go:130] > [crio.network]
	I1201 21:07:14.740498  521964 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1201 21:07:14.740504  521964 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1201 21:07:14.740512  521964 command_runner.go:130] > # cni_default_network = ""
	I1201 21:07:14.740519  521964 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1201 21:07:14.740530  521964 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1201 21:07:14.740540  521964 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1201 21:07:14.740549  521964 command_runner.go:130] > # plugin_dirs = [
	I1201 21:07:14.740562  521964 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1201 21:07:14.740566  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740576  521964 command_runner.go:130] > # List of included pod metrics.
	I1201 21:07:14.740580  521964 command_runner.go:130] > # included_pod_metrics = [
	I1201 21:07:14.740583  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740588  521964 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1201 21:07:14.740596  521964 command_runner.go:130] > [crio.metrics]
	I1201 21:07:14.740602  521964 command_runner.go:130] > # Globally enable or disable metrics support.
	I1201 21:07:14.740614  521964 command_runner.go:130] > # enable_metrics = false
	I1201 21:07:14.740622  521964 command_runner.go:130] > # Specify enabled metrics collectors.
	I1201 21:07:14.740637  521964 command_runner.go:130] > # Per default all metrics are enabled.
	I1201 21:07:14.740644  521964 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1201 21:07:14.740655  521964 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1201 21:07:14.740662  521964 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1201 21:07:14.740666  521964 command_runner.go:130] > # metrics_collectors = [
	I1201 21:07:14.740674  521964 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1201 21:07:14.740680  521964 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1201 21:07:14.740688  521964 command_runner.go:130] > # 	"containers_oom_total",
	I1201 21:07:14.740692  521964 command_runner.go:130] > # 	"processes_defunct",
	I1201 21:07:14.740706  521964 command_runner.go:130] > # 	"operations_total",
	I1201 21:07:14.740714  521964 command_runner.go:130] > # 	"operations_latency_seconds",
	I1201 21:07:14.740719  521964 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1201 21:07:14.740727  521964 command_runner.go:130] > # 	"operations_errors_total",
	I1201 21:07:14.740731  521964 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1201 21:07:14.740736  521964 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1201 21:07:14.740740  521964 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1201 21:07:14.740748  521964 command_runner.go:130] > # 	"image_pulls_success_total",
	I1201 21:07:14.740753  521964 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1201 21:07:14.740761  521964 command_runner.go:130] > # 	"containers_oom_count_total",
	I1201 21:07:14.740766  521964 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1201 21:07:14.740780  521964 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1201 21:07:14.740789  521964 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1201 21:07:14.740792  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740803  521964 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1201 21:07:14.740807  521964 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1201 21:07:14.740812  521964 command_runner.go:130] > # The port on which the metrics server will listen.
	I1201 21:07:14.740816  521964 command_runner.go:130] > # metrics_port = 9090
	I1201 21:07:14.740825  521964 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1201 21:07:14.740829  521964 command_runner.go:130] > # metrics_socket = ""
	I1201 21:07:14.740839  521964 command_runner.go:130] > # The certificate for the secure metrics server.
	I1201 21:07:14.740846  521964 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1201 21:07:14.740867  521964 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1201 21:07:14.740879  521964 command_runner.go:130] > # certificate on any modification event.
	I1201 21:07:14.740883  521964 command_runner.go:130] > # metrics_cert = ""
	I1201 21:07:14.740888  521964 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1201 21:07:14.740897  521964 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1201 21:07:14.740901  521964 command_runner.go:130] > # metrics_key = ""
	I1201 21:07:14.740912  521964 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1201 21:07:14.740916  521964 command_runner.go:130] > [crio.tracing]
	I1201 21:07:14.740933  521964 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1201 21:07:14.740941  521964 command_runner.go:130] > # enable_tracing = false
	I1201 21:07:14.740946  521964 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1201 21:07:14.740959  521964 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1201 21:07:14.740966  521964 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1201 21:07:14.740970  521964 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1201 21:07:14.740975  521964 command_runner.go:130] > # CRI-O NRI configuration.
	I1201 21:07:14.740982  521964 command_runner.go:130] > [crio.nri]
	I1201 21:07:14.740987  521964 command_runner.go:130] > # Globally enable or disable NRI.
	I1201 21:07:14.740993  521964 command_runner.go:130] > # enable_nri = true
	I1201 21:07:14.741004  521964 command_runner.go:130] > # NRI socket to listen on.
	I1201 21:07:14.741013  521964 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1201 21:07:14.741018  521964 command_runner.go:130] > # NRI plugin directory to use.
	I1201 21:07:14.741026  521964 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1201 21:07:14.741031  521964 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1201 21:07:14.741039  521964 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1201 21:07:14.741046  521964 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1201 21:07:14.741111  521964 command_runner.go:130] > # nri_disable_connections = false
	I1201 21:07:14.741122  521964 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1201 21:07:14.741131  521964 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1201 21:07:14.741137  521964 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1201 21:07:14.741142  521964 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1201 21:07:14.741156  521964 command_runner.go:130] > # NRI default validator configuration.
	I1201 21:07:14.741167  521964 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1201 21:07:14.741178  521964 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1201 21:07:14.741190  521964 command_runner.go:130] > # can be restricted/rejected:
	I1201 21:07:14.741198  521964 command_runner.go:130] > # - OCI hook injection
	I1201 21:07:14.741206  521964 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1201 21:07:14.741214  521964 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1201 21:07:14.741218  521964 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1201 21:07:14.741229  521964 command_runner.go:130] > # - adjustment of linux namespaces
	I1201 21:07:14.741241  521964 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1201 21:07:14.741252  521964 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1201 21:07:14.741262  521964 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1201 21:07:14.741268  521964 command_runner.go:130] > #
	I1201 21:07:14.741276  521964 command_runner.go:130] > # [crio.nri.default_validator]
	I1201 21:07:14.741281  521964 command_runner.go:130] > # nri_enable_default_validator = false
	I1201 21:07:14.741290  521964 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1201 21:07:14.741295  521964 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1201 21:07:14.741308  521964 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1201 21:07:14.741318  521964 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1201 21:07:14.741323  521964 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1201 21:07:14.741331  521964 command_runner.go:130] > # nri_validator_required_plugins = [
	I1201 21:07:14.741334  521964 command_runner.go:130] > # ]
	I1201 21:07:14.741344  521964 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1201 21:07:14.741350  521964 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1201 21:07:14.741357  521964 command_runner.go:130] > [crio.stats]
	I1201 21:07:14.741364  521964 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1201 21:07:14.741379  521964 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1201 21:07:14.741384  521964 command_runner.go:130] > # stats_collection_period = 0
	I1201 21:07:14.741390  521964 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1201 21:07:14.741400  521964 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1201 21:07:14.741409  521964 command_runner.go:130] > # collection_period = 0
	I1201 21:07:14.743695  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.701489723Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1201 21:07:14.743741  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.701919228Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1201 21:07:14.743753  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.702192379Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1201 21:07:14.743761  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.70239116Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1201 21:07:14.743770  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.702743464Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.743783  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.703251326Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1201 21:07:14.743797  521964 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1201 21:07:14.743882  521964 cni.go:84] Creating CNI manager for ""
	I1201 21:07:14.743892  521964 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:07:14.743907  521964 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 21:07:14.743929  521964 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-198694 NodeName:functional-198694 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 21:07:14.744055  521964 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-198694"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
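The dump above is a single file carrying four YAML documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), which minikube then scp's to `/var/tmp/minikube/kubeadm.yaml.new`. A quick way to see what kubeadm will receive, sketched on an abbreviated copy (the file name `kubeadm-abbrev.yaml` is hypothetical):

```shell
# Abbreviated stand-in for the four-document config dumped in the log.
cat > kubeadm-abbrev.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF

# List the kind of each document kubeadm will parse.
grep '^kind:' kubeadm-abbrev.yaml
```

Each document is validated against its own API group, which is why a version bump (here `v1beta4` for the kubeadm types) can change the accepted shape of one document without touching the others.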
	I1201 21:07:14.744124  521964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 21:07:14.751405  521964 command_runner.go:130] > kubeadm
	I1201 21:07:14.751425  521964 command_runner.go:130] > kubectl
	I1201 21:07:14.751429  521964 command_runner.go:130] > kubelet
	I1201 21:07:14.752384  521964 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 21:07:14.752448  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 21:07:14.760026  521964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 21:07:14.773137  521964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 21:07:14.786891  521964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1201 21:07:14.799994  521964 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1201 21:07:14.803501  521964 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1201 21:07:14.803615  521964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:07:14.920306  521964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 21:07:15.405274  521964 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694 for IP: 192.168.49.2
	I1201 21:07:15.405300  521964 certs.go:195] generating shared ca certs ...
	I1201 21:07:15.405343  521964 certs.go:227] acquiring lock for ca certs: {Name:mk0475ccdbd6f854bab22fd8dfb32cc1af021336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:15.405542  521964 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key
	I1201 21:07:15.405589  521964 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key
	I1201 21:07:15.405597  521964 certs.go:257] generating profile certs ...
	I1201 21:07:15.405726  521964 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key
	I1201 21:07:15.405806  521964 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key.ab5f5a28
	I1201 21:07:15.405849  521964 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key
	I1201 21:07:15.405858  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1201 21:07:15.405870  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1201 21:07:15.405880  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1201 21:07:15.405895  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1201 21:07:15.405908  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1201 21:07:15.405920  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1201 21:07:15.405931  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1201 21:07:15.405941  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1201 21:07:15.406006  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem (1338 bytes)
	W1201 21:07:15.406049  521964 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002_empty.pem, impossibly tiny 0 bytes
	I1201 21:07:15.406068  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 21:07:15.406113  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem (1082 bytes)
	I1201 21:07:15.406137  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem (1123 bytes)
	I1201 21:07:15.406172  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem (1675 bytes)
	I1201 21:07:15.406237  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:07:15.406287  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem -> /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.406308  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.406325  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.407085  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 21:07:15.435325  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 21:07:15.460453  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 21:07:15.484820  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1201 21:07:15.503541  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 21:07:15.522001  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 21:07:15.540074  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 21:07:15.557935  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 21:07:15.576709  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem --> /usr/share/ca-certificates/486002.pem (1338 bytes)
	I1201 21:07:15.595484  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /usr/share/ca-certificates/4860022.pem (1708 bytes)
	I1201 21:07:15.614431  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 21:07:15.632609  521964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 21:07:15.645463  521964 ssh_runner.go:195] Run: openssl version
	I1201 21:07:15.651732  521964 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1201 21:07:15.652120  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/486002.pem && ln -fs /usr/share/ca-certificates/486002.pem /etc/ssl/certs/486002.pem"
	I1201 21:07:15.660522  521964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.664099  521964 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.664137  521964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.664196  521964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.704899  521964 command_runner.go:130] > 51391683
	I1201 21:07:15.705348  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/486002.pem /etc/ssl/certs/51391683.0"
	I1201 21:07:15.713374  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4860022.pem && ln -fs /usr/share/ca-certificates/4860022.pem /etc/ssl/certs/4860022.pem"
	I1201 21:07:15.721756  521964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.725563  521964 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.725613  521964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.725662  521964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.766341  521964 command_runner.go:130] > 3ec20f2e
	I1201 21:07:15.766756  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4860022.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 21:07:15.774531  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 21:07:15.784868  521964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.788871  521964 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.788929  521964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.788991  521964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.829962  521964 command_runner.go:130] > b5213941
	I1201 21:07:15.830101  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
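The three `openssl x509 -hash` / `ln -fs` pairs above install each CA under `/etc/ssl/certs/<subject-hash>.0`, the naming scheme OpenSSL's `-CApath` lookup requires. A minimal sketch of the same scheme in a scratch directory, assuming a throwaway self-signed CA (`demoCA.pem` and all paths here are hypothetical):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# Throwaway self-signed CA standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout ca.key -out demoCA.pem -days 1 2>/dev/null

# Same command the log runs to derive the link name (e.g. "b5213941").
hash=$(openssl x509 -hash -noout -in demoCA.pem)

# Equivalent of: ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/<hash>.0
mkdir certs
cp demoCA.pem certs/
ln -fs demoCA.pem "certs/${hash}.0"

# With the hash symlink in place, directory-based verification succeeds.
openssl verify -CApath certs demoCA.pem   # prints "demoCA.pem: OK"
```

Without the `<hash>.0` symlink, `-CApath` verification fails even though the PEM file is present in the directory, which is why minikube creates the links rather than just copying the certs.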
	I1201 21:07:15.838399  521964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 21:07:15.842255  521964 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 21:07:15.842282  521964 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1201 21:07:15.842289  521964 command_runner.go:130] > Device: 259,1	Inode: 2345358     Links: 1
	I1201 21:07:15.842296  521964 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1201 21:07:15.842308  521964 command_runner.go:130] > Access: 2025-12-01 21:03:07.261790641 +0000
	I1201 21:07:15.842313  521964 command_runner.go:130] > Modify: 2025-12-01 20:59:03.599058650 +0000
	I1201 21:07:15.842318  521964 command_runner.go:130] > Change: 2025-12-01 20:59:03.599058650 +0000
	I1201 21:07:15.842324  521964 command_runner.go:130] >  Birth: 2025-12-01 20:59:03.599058650 +0000
	I1201 21:07:15.842405  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 21:07:15.883885  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:15.884377  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 21:07:15.925029  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:15.925488  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 21:07:15.967363  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:15.967505  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 21:07:16.008933  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:16.009470  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 21:07:16.052395  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:16.052881  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 21:07:16.094441  521964 command_runner.go:130] > Certificate will not expire
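Each "Certificate will not expire" line above is the output of an `openssl x509 -checkend 86400` probe: exit status 0 means the cert is still valid that many seconds from now, so minikube can skip regeneration. A small sketch of both outcomes, using a throwaway two-day cert (file names are hypothetical):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# Throwaway cert valid for 2 days.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=probe" \
  -keyout k.pem -out c.pem -days 2 2>/dev/null

# Valid for at least another day: exit 0, prints "Certificate will not expire".
openssl x509 -noout -in c.pem -checkend 86400

# Asking for a full year ahead: exit 1, prints "Certificate will expire".
openssl x509 -noout -in c.pem -checkend 31536000 || true
```

The nonzero exit on the second probe is what a caller like minikube keys off; the printed message is informational only.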
	I1201 21:07:16.094868  521964 kubeadm.go:401] StartCluster: {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:07:16.094970  521964 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 21:07:16.095033  521964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 21:07:16.122671  521964 cri.go:89] found id: ""
	I1201 21:07:16.122745  521964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 21:07:16.129629  521964 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1201 21:07:16.129704  521964 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1201 21:07:16.129749  521964 command_runner.go:130] > /var/lib/minikube/etcd:
	I1201 21:07:16.130618  521964 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 21:07:16.130634  521964 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 21:07:16.130700  521964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 21:07:16.138263  521964 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:07:16.138690  521964 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-198694" does not appear in /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.138796  521964 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-482752/kubeconfig needs updating (will repair): [kubeconfig missing "functional-198694" cluster setting kubeconfig missing "functional-198694" context setting]
	I1201 21:07:16.139097  521964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/kubeconfig: {Name:mk92cfd0553ba70a7f11610c1bc1b8b04b905ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:16.139560  521964 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.139697  521964 kapi.go:59] client config for functional-198694: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key", CAFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 21:07:16.140229  521964 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1201 21:07:16.140256  521964 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1201 21:07:16.140265  521964 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1201 21:07:16.140270  521964 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1201 21:07:16.140285  521964 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1201 21:07:16.140581  521964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 21:07:16.140673  521964 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1201 21:07:16.148484  521964 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1201 21:07:16.148518  521964 kubeadm.go:602] duration metric: took 17.877938ms to restartPrimaryControlPlane
	I1201 21:07:16.148528  521964 kubeadm.go:403] duration metric: took 53.667619ms to StartCluster
	I1201 21:07:16.148545  521964 settings.go:142] acquiring lock: {Name:mk783c1fd28fb527bb837882511f132133dc86fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:16.148604  521964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.149244  521964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/kubeconfig: {Name:mk92cfd0553ba70a7f11610c1bc1b8b04b905ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:16.149450  521964 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 21:07:16.149837  521964 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:07:16.149887  521964 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 21:07:16.149959  521964 addons.go:70] Setting storage-provisioner=true in profile "functional-198694"
	I1201 21:07:16.149971  521964 addons.go:239] Setting addon storage-provisioner=true in "functional-198694"
	I1201 21:07:16.149997  521964 host.go:66] Checking if "functional-198694" exists ...
	I1201 21:07:16.150469  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:16.150813  521964 addons.go:70] Setting default-storageclass=true in profile "functional-198694"
	I1201 21:07:16.150847  521964 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-198694"
	I1201 21:07:16.151095  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:16.157800  521964 out.go:179] * Verifying Kubernetes components...
	I1201 21:07:16.160495  521964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:07:16.191854  521964 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 21:07:16.194709  521964 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:16.194728  521964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 21:07:16.194804  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:16.200857  521964 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.201020  521964 kapi.go:59] client config for functional-198694: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key", CAFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 21:07:16.201620  521964 addons.go:239] Setting addon default-storageclass=true in "functional-198694"
	I1201 21:07:16.201664  521964 host.go:66] Checking if "functional-198694" exists ...
	I1201 21:07:16.202447  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:16.245603  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:16.261120  521964 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:16.261144  521964 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 21:07:16.261216  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:16.294119  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:16.373164  521964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 21:07:16.408855  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:16.445769  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:17.156317  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.156488  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156559  521964 retry.go:31] will retry after 323.483538ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156628  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.156659  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156673  521964 retry.go:31] will retry after 132.387182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156540  521964 node_ready.go:35] waiting up to 6m0s for node "functional-198694" to be "Ready" ...
	I1201 21:07:17.156859  521964 type.go:168] "Request Body" body=""
	I1201 21:07:17.156951  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:17.157318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:17.289607  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:17.345927  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.349389  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.349423  521964 retry.go:31] will retry after 369.598465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.480797  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:17.537300  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.541071  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.541105  521964 retry.go:31] will retry after 250.665906ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.657414  521964 type.go:168] "Request Body" body=""
	I1201 21:07:17.657490  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:17.657803  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:17.720223  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:17.783305  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.783341  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.783362  521964 retry.go:31] will retry after 375.003536ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.792548  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:17.854946  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.854989  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.855009  521964 retry.go:31] will retry after 643.882626ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.157596  521964 type.go:168] "Request Body" body=""
	I1201 21:07:18.157670  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:18.158003  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:18.159267  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:18.225579  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:18.225683  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.225726  521964 retry.go:31] will retry after 1.172405999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.500161  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:18.566908  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:18.566958  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.566979  521964 retry.go:31] will retry after 1.221518169s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.657190  521964 type.go:168] "Request Body" body=""
	I1201 21:07:18.657271  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:18.657601  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:19.157332  521964 type.go:168] "Request Body" body=""
	I1201 21:07:19.157408  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:19.157736  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:19.157807  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:19.398291  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:19.478299  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:19.478401  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:19.478424  521964 retry.go:31] will retry after 725.636222ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:19.657679  521964 type.go:168] "Request Body" body=""
	I1201 21:07:19.657755  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:19.658075  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:19.789414  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:19.847191  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:19.847229  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:19.847250  521964 retry.go:31] will retry after 688.680113ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.157514  521964 type.go:168] "Request Body" body=""
	I1201 21:07:20.157586  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:20.157835  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:20.205210  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:20.265409  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:20.265448  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.265467  521964 retry.go:31] will retry after 1.46538703s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.536913  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:20.597058  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:20.597109  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.597130  521964 retry.go:31] will retry after 1.65793185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.657434  521964 type.go:168] "Request Body" body=""
	I1201 21:07:20.657509  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:20.657856  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:21.157726  521964 type.go:168] "Request Body" body=""
	I1201 21:07:21.157805  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:21.158133  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:21.158204  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:21.656922  521964 type.go:168] "Request Body" body=""
	I1201 21:07:21.657048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:21.657367  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:21.731621  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:21.794486  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:21.794526  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:21.794546  521964 retry.go:31] will retry after 2.907930062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:22.156980  521964 type.go:168] "Request Body" body=""
	I1201 21:07:22.157055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:22.157385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:22.255851  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:22.319449  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:22.319491  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:22.319511  521964 retry.go:31] will retry after 2.874628227s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:22.656962  521964 type.go:168] "Request Body" body=""
	I1201 21:07:22.657055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:22.657381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:23.156910  521964 type.go:168] "Request Body" body=""
	I1201 21:07:23.156984  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:23.157294  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:23.656973  521964 type.go:168] "Request Body" body=""
	I1201 21:07:23.657065  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:23.657420  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:23.657472  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:24.157139  521964 type.go:168] "Request Body" body=""
	I1201 21:07:24.157221  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:24.157543  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:24.657245  521964 type.go:168] "Request Body" body=""
	I1201 21:07:24.657316  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:24.657622  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:24.702795  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:24.765996  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:24.766044  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:24.766064  521964 retry.go:31] will retry after 4.286350529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:25.157658  521964 type.go:168] "Request Body" body=""
	I1201 21:07:25.157735  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:25.158024  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:25.194368  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:25.250297  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:25.253946  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:25.253992  521964 retry.go:31] will retry after 4.844090269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:25.657562  521964 type.go:168] "Request Body" body=""
	I1201 21:07:25.657643  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:25.657986  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:25.658042  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:26.157893  521964 type.go:168] "Request Body" body=""
	I1201 21:07:26.157964  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:26.158227  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:26.657145  521964 type.go:168] "Request Body" body=""
	I1201 21:07:26.657225  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:26.657521  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:27.156930  521964 type.go:168] "Request Body" body=""
	I1201 21:07:27.157016  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:27.157343  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:27.656909  521964 type.go:168] "Request Body" body=""
	I1201 21:07:27.656990  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:27.657272  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:28.156970  521964 type.go:168] "Request Body" body=""
	I1201 21:07:28.157048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:28.157420  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:28.157476  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:28.657148  521964 type.go:168] "Request Body" body=""
	I1201 21:07:28.657244  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:28.657592  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:29.053156  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:29.109834  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:29.112973  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:29.113004  521964 retry.go:31] will retry after 7.544668628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:29.157167  521964 type.go:168] "Request Body" body=""
	I1201 21:07:29.157241  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:29.157507  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:29.656951  521964 type.go:168] "Request Body" body=""
	I1201 21:07:29.657043  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:29.657380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:30.099244  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:30.157873  521964 type.go:168] "Request Body" body=""
	I1201 21:07:30.157941  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:30.158210  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:30.158254  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:30.164980  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:30.165032  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:30.165052  521964 retry.go:31] will retry after 3.932491359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:30.657621  521964 type.go:168] "Request Body" body=""
	I1201 21:07:30.657701  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:30.657964  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:31.157730  521964 type.go:168] "Request Body" body=""
	I1201 21:07:31.157809  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:31.158140  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:31.656971  521964 type.go:168] "Request Body" body=""
	I1201 21:07:31.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:31.657377  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:32.156945  521964 type.go:168] "Request Body" body=""
	I1201 21:07:32.157020  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:32.157283  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:32.656981  521964 type.go:168] "Request Body" body=""
	I1201 21:07:32.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:32.657395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:32.657449  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:33.156987  521964 type.go:168] "Request Body" body=""
	I1201 21:07:33.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:33.157406  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:33.657102  521964 type.go:168] "Request Body" body=""
	I1201 21:07:33.657175  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:33.657460  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:34.097811  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:34.156372  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:34.156417  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:34.156437  521964 retry.go:31] will retry after 10.974576666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:34.157589  521964 type.go:168] "Request Body" body=""
	I1201 21:07:34.157652  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:34.157912  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:34.657701  521964 type.go:168] "Request Body" body=""
	I1201 21:07:34.657780  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:34.658097  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:34.658164  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:35.157826  521964 type.go:168] "Request Body" body=""
	I1201 21:07:35.157905  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:35.158165  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:35.656910  521964 type.go:168] "Request Body" body=""
	I1201 21:07:35.656988  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:35.657287  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:36.157319  521964 type.go:168] "Request Body" body=""
	I1201 21:07:36.157409  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:36.157794  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:36.657573  521964 type.go:168] "Request Body" body=""
	I1201 21:07:36.657644  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:36.657912  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:36.658034  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:36.730483  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:36.730533  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:36.730554  521964 retry.go:31] will retry after 6.063500375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:37.157009  521964 type.go:168] "Request Body" body=""
	I1201 21:07:37.157097  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:37.157448  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:37.157505  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:37.657206  521964 type.go:168] "Request Body" body=""
	I1201 21:07:37.657296  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:37.657631  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:38.157704  521964 type.go:168] "Request Body" body=""
	I1201 21:07:38.157772  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:38.158095  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:38.657893  521964 type.go:168] "Request Body" body=""
	I1201 21:07:38.657966  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:38.658289  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:39.156875  521964 type.go:168] "Request Body" body=""
	I1201 21:07:39.156971  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:39.157322  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:39.656915  521964 type.go:168] "Request Body" body=""
	I1201 21:07:39.657022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:39.657329  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:39.657378  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:40.156953  521964 type.go:168] "Request Body" body=""
	I1201 21:07:40.157033  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:40.157378  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:40.657085  521964 type.go:168] "Request Body" body=""
	I1201 21:07:40.657161  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:40.657485  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:41.157198  521964 type.go:168] "Request Body" body=""
	I1201 21:07:41.157267  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:41.157524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:41.657708  521964 type.go:168] "Request Body" body=""
	I1201 21:07:41.657786  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:41.658115  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:41.658168  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:42.157124  521964 type.go:168] "Request Body" body=""
	I1201 21:07:42.157211  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:42.157646  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:42.657031  521964 type.go:168] "Request Body" body=""
	I1201 21:07:42.657110  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:42.657398  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:42.794843  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:42.853617  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:42.853659  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:42.853680  521964 retry.go:31] will retry after 14.65335173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:43.156952  521964 type.go:168] "Request Body" body=""
	I1201 21:07:43.157032  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:43.157349  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:43.656979  521964 type.go:168] "Request Body" body=""
	I1201 21:07:43.657076  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:43.657394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:44.156994  521964 type.go:168] "Request Body" body=""
	I1201 21:07:44.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:44.157343  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:44.157384  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:44.656997  521964 type.go:168] "Request Body" body=""
	I1201 21:07:44.657087  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:44.657396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:45.131211  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:45.157806  521964 type.go:168] "Request Body" body=""
	I1201 21:07:45.157891  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:45.158212  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:45.221334  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:45.221384  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:45.221409  521964 retry.go:31] will retry after 11.551495399s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:45.657812  521964 type.go:168] "Request Body" body=""
	I1201 21:07:45.657890  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:45.658158  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:46.157214  521964 type.go:168] "Request Body" body=""
	I1201 21:07:46.157292  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:46.157581  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:46.157642  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:46.657575  521964 type.go:168] "Request Body" body=""
	I1201 21:07:46.657647  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:46.657977  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:47.157285  521964 type.go:168] "Request Body" body=""
	I1201 21:07:47.157350  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:47.157647  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:47.656995  521964 type.go:168] "Request Body" body=""
	I1201 21:07:47.657075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:47.657403  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:48.156986  521964 type.go:168] "Request Body" body=""
	I1201 21:07:48.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:48.157380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:48.657048  521964 type.go:168] "Request Body" body=""
	I1201 21:07:48.657118  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:48.657442  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:48.657502  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:49.157019  521964 type.go:168] "Request Body" body=""
	I1201 21:07:49.157102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:49.157404  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:49.657133  521964 type.go:168] "Request Body" body=""
	I1201 21:07:49.657208  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:49.657513  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:50.156941  521964 type.go:168] "Request Body" body=""
	I1201 21:07:50.157013  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:50.157268  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:50.656997  521964 type.go:168] "Request Body" body=""
	I1201 21:07:50.657077  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:50.657401  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:51.157109  521964 type.go:168] "Request Body" body=""
	I1201 21:07:51.157181  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:51.157563  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:51.157620  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:51.657409  521964 type.go:168] "Request Body" body=""
	I1201 21:07:51.657480  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:51.657812  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:52.157619  521964 type.go:168] "Request Body" body=""
	I1201 21:07:52.157701  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:52.158034  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:52.657819  521964 type.go:168] "Request Body" body=""
	I1201 21:07:52.657897  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:52.658222  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:53.157452  521964 type.go:168] "Request Body" body=""
	I1201 21:07:53.157532  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:53.157789  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:53.157829  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:53.657659  521964 type.go:168] "Request Body" body=""
	I1201 21:07:53.657737  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:53.658067  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:54.157887  521964 type.go:168] "Request Body" body=""
	I1201 21:07:54.157963  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:54.158311  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:54.656870  521964 type.go:168] "Request Body" body=""
	I1201 21:07:54.656941  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:54.657207  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:55.156922  521964 type.go:168] "Request Body" body=""
	I1201 21:07:55.156998  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:55.157347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:55.656937  521964 type.go:168] "Request Body" body=""
	I1201 21:07:55.657065  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:55.657390  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:55.657445  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:56.157203  521964 type.go:168] "Request Body" body=""
	I1201 21:07:56.157283  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:56.157556  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:56.657510  521964 type.go:168] "Request Body" body=""
	I1201 21:07:56.657589  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:56.657925  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:56.773160  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:56.828599  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:56.831983  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:56.832017  521964 retry.go:31] will retry after 19.593958555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:57.157556  521964 type.go:168] "Request Body" body=""
	I1201 21:07:57.157632  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:57.157962  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:57.507290  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:57.561691  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:57.565020  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:57.565054  521964 retry.go:31] will retry after 13.393925675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:57.657318  521964 type.go:168] "Request Body" body=""
	I1201 21:07:57.657392  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:57.657711  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:57.657760  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:58.157573  521964 type.go:168] "Request Body" body=""
	I1201 21:07:58.157646  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:58.157951  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:58.657731  521964 type.go:168] "Request Body" body=""
	I1201 21:07:58.657806  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:58.658143  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:59.157766  521964 type.go:168] "Request Body" body=""
	I1201 21:07:59.157844  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:59.158113  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:59.657909  521964 type.go:168] "Request Body" body=""
	I1201 21:07:59.657992  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:59.658327  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:59.658388  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:00.157067  521964 type.go:168] "Request Body" body=""
	I1201 21:08:00.157155  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:00.157502  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:00.656909  521964 type.go:168] "Request Body" body=""
	I1201 21:08:00.656981  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:00.657260  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:01.156980  521964 type.go:168] "Request Body" body=""
	I1201 21:08:01.157061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:01.157434  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:01.656989  521964 type.go:168] "Request Body" body=""
	I1201 21:08:01.657067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:01.657427  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:02.157121  521964 type.go:168] "Request Body" body=""
	I1201 21:08:02.157192  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:02.157450  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:02.157491  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:02.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:08:02.657058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:02.657389  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:03.156950  521964 type.go:168] "Request Body" body=""
	I1201 21:08:03.157027  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:03.157381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:03.657653  521964 type.go:168] "Request Body" body=""
	I1201 21:08:03.657724  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:03.658043  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:04.157851  521964 type.go:168] "Request Body" body=""
	I1201 21:08:04.157926  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:04.158295  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:04.158353  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:04.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:08:04.656984  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:04.657308  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:05.159258  521964 type.go:168] "Request Body" body=""
	I1201 21:08:05.159335  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:05.159644  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:05.656978  521964 type.go:168] "Request Body" body=""
	I1201 21:08:05.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:05.657400  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:06.157419  521964 type.go:168] "Request Body" body=""
	I1201 21:08:06.157493  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:06.157828  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:06.657668  521964 type.go:168] "Request Body" body=""
	I1201 21:08:06.657743  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:06.658026  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:06.658074  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:07.157783  521964 type.go:168] "Request Body" body=""
	I1201 21:08:07.157860  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:07.158171  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:07.656931  521964 type.go:168] "Request Body" body=""
	I1201 21:08:07.657012  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:07.657345  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:08.157032  521964 type.go:168] "Request Body" body=""
	I1201 21:08:08.157106  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:08.157464  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:08.657162  521964 type.go:168] "Request Body" body=""
	I1201 21:08:08.657254  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:08.657573  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:09.157296  521964 type.go:168] "Request Body" body=""
	I1201 21:08:09.157373  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:09.157697  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:09.157750  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:09.657059  521964 type.go:168] "Request Body" body=""
	I1201 21:08:09.657141  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:09.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:10.156962  521964 type.go:168] "Request Body" body=""
	I1201 21:08:10.157037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:10.157365  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:10.656967  521964 type.go:168] "Request Body" body=""
	I1201 21:08:10.657051  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:10.657364  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:10.960044  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:08:11.016321  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:11.019785  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:11.019824  521964 retry.go:31] will retry after 44.695855679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:11.156928  521964 type.go:168] "Request Body" body=""
	I1201 21:08:11.157027  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:11.157315  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:11.657003  521964 type.go:168] "Request Body" body=""
	I1201 21:08:11.657075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:11.657408  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:11.657463  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:12.157121  521964 type.go:168] "Request Body" body=""
	I1201 21:08:12.157198  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:12.157770  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:12.657058  521964 type.go:168] "Request Body" body=""
	I1201 21:08:12.657134  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:12.657388  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:13.156995  521964 type.go:168] "Request Body" body=""
	I1201 21:08:13.157075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:13.157385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:13.657097  521964 type.go:168] "Request Body" body=""
	I1201 21:08:13.657169  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:13.657467  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:13.657512  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:14.156922  521964 type.go:168] "Request Body" body=""
	I1201 21:08:14.157012  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:14.157318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:14.657025  521964 type.go:168] "Request Body" body=""
	I1201 21:08:14.657098  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:14.657429  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:15.157163  521964 type.go:168] "Request Body" body=""
	I1201 21:08:15.157273  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:15.157607  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:15.657300  521964 type.go:168] "Request Body" body=""
	I1201 21:08:15.657393  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:15.657718  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:15.657762  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:16.157618  521964 type.go:168] "Request Body" body=""
	I1201 21:08:16.157704  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:16.158073  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:16.426568  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:08:16.504541  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:16.504580  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:16.504599  521964 retry.go:31] will retry after 41.569353087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:16.657931  521964 type.go:168] "Request Body" body=""
	I1201 21:08:16.658002  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:16.658310  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:17.156879  521964 type.go:168] "Request Body" body=""
	I1201 21:08:17.156968  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:17.157222  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:17.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:08:17.657052  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:17.657405  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:18.157142  521964 type.go:168] "Request Body" body=""
	I1201 21:08:18.157229  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:18.157610  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:18.157665  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:18.657777  521964 type.go:168] "Request Body" body=""
	I1201 21:08:18.657865  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:18.658174  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:19.156889  521964 type.go:168] "Request Body" body=""
	I1201 21:08:19.156967  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:19.157284  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:19.657012  521964 type.go:168] "Request Body" body=""
	I1201 21:08:19.657096  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:19.657452  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:20.156910  521964 type.go:168] "Request Body" body=""
	I1201 21:08:20.156981  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:20.157264  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:20.656990  521964 type.go:168] "Request Body" body=""
	I1201 21:08:20.657076  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:20.657458  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:20.657526  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:21.157000  521964 type.go:168] "Request Body" body=""
	I1201 21:08:21.157092  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:21.157485  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:21.656883  521964 type.go:168] "Request Body" body=""
	I1201 21:08:21.656968  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:21.657320  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:22.157049  521964 type.go:168] "Request Body" body=""
	I1201 21:08:22.157135  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:22.157505  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:22.657283  521964 type.go:168] "Request Body" body=""
	I1201 21:08:22.657387  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:22.657820  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:22.657893  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:23.157642  521964 type.go:168] "Request Body" body=""
	I1201 21:08:23.157715  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:23.157983  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:23.657627  521964 type.go:168] "Request Body" body=""
	I1201 21:08:23.657716  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:23.658152  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:24.156945  521964 type.go:168] "Request Body" body=""
	I1201 21:08:24.157033  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:24.157478  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:24.657185  521964 type.go:168] "Request Body" body=""
	I1201 21:08:24.657275  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:24.657653  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:25.157527  521964 type.go:168] "Request Body" body=""
	I1201 21:08:25.157631  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:25.158006  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:25.158072  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:25.657861  521964 type.go:168] "Request Body" body=""
	I1201 21:08:25.657947  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:25.658305  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:26.157315  521964 type.go:168] "Request Body" body=""
	I1201 21:08:26.157387  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:26.157664  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:26.657761  521964 type.go:168] "Request Body" body=""
	I1201 21:08:26.657845  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:26.658250  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:27.157001  521964 type.go:168] "Request Body" body=""
	I1201 21:08:27.157089  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:27.157490  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:27.657204  521964 type.go:168] "Request Body" body=""
	I1201 21:08:27.657277  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:27.657573  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:27.657627  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:28.157001  521964 type.go:168] "Request Body" body=""
	I1201 21:08:28.157095  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:28.157476  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:28.657072  521964 type.go:168] "Request Body" body=""
	I1201 21:08:28.657162  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:28.657537  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:29.157417  521964 type.go:168] "Request Body" body=""
	I1201 21:08:29.157501  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:29.157799  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:29.657718  521964 type.go:168] "Request Body" body=""
	I1201 21:08:29.657811  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:29.658220  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:29.658280  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:30.156978  521964 type.go:168] "Request Body" body=""
	I1201 21:08:30.157057  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:30.157401  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:30.656889  521964 type.go:168] "Request Body" body=""
	I1201 21:08:30.656971  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:30.657275  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:31.157026  521964 type.go:168] "Request Body" body=""
	I1201 21:08:31.157118  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:31.157518  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:31.656986  521964 type.go:168] "Request Body" body=""
	I1201 21:08:31.657072  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:31.657407  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:32.157753  521964 type.go:168] "Request Body" body=""
	I1201 21:08:32.157835  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:32.158232  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:32.158291  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:32.657000  521964 type.go:168] "Request Body" body=""
	I1201 21:08:32.657087  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:32.657475  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:33.157220  521964 type.go:168] "Request Body" body=""
	I1201 21:08:33.157305  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:33.157692  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:33.657487  521964 type.go:168] "Request Body" body=""
	I1201 21:08:33.657629  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:33.657931  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:34.157729  521964 type.go:168] "Request Body" body=""
	I1201 21:08:34.157800  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:34.158115  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:34.656912  521964 type.go:168] "Request Body" body=""
	I1201 21:08:34.657003  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:34.657412  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:34.657482  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:35.157152  521964 type.go:168] "Request Body" body=""
	I1201 21:08:35.157241  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:35.157546  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:35.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:08:35.657062  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:35.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:36.157282  521964 type.go:168] "Request Body" body=""
	I1201 21:08:36.157367  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:36.157727  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:36.657599  521964 type.go:168] "Request Body" body=""
	I1201 21:08:36.657686  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:36.657988  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:36.658045  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:37.157802  521964 type.go:168] "Request Body" body=""
	I1201 21:08:37.157896  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:37.158276  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:37.657031  521964 type.go:168] "Request Body" body=""
	I1201 21:08:37.657119  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:37.657486  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:38.157766  521964 type.go:168] "Request Body" body=""
	I1201 21:08:38.157842  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:38.158130  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:38.657916  521964 type.go:168] "Request Body" body=""
	I1201 21:08:38.657997  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:38.658359  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:38.658421  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:39.156995  521964 type.go:168] "Request Body" body=""
	I1201 21:08:39.157093  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:39.157508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:39.657230  521964 type.go:168] "Request Body" body=""
	I1201 21:08:39.657317  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:39.657685  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:40.157525  521964 type.go:168] "Request Body" body=""
	I1201 21:08:40.157614  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:40.157997  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:40.657880  521964 type.go:168] "Request Body" body=""
	I1201 21:08:40.657968  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:40.658348  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:41.156945  521964 type.go:168] "Request Body" body=""
	I1201 21:08:41.157030  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:41.157382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:41.157447  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:41.657680  521964 type.go:168] "Request Body" body=""
	I1201 21:08:41.657767  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:41.658134  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:42.156920  521964 type.go:168] "Request Body" body=""
	I1201 21:08:42.157036  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:42.157525  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:42.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:08:42.657005  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:42.657312  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:43.156990  521964 type.go:168] "Request Body" body=""
	I1201 21:08:43.157079  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:43.157479  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:43.157548  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:43.657235  521964 type.go:168] "Request Body" body=""
	I1201 21:08:43.657325  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:43.657683  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:44.157498  521964 type.go:168] "Request Body" body=""
	I1201 21:08:44.157581  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:44.158002  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:44.657818  521964 type.go:168] "Request Body" body=""
	I1201 21:08:44.657915  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:44.658331  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:45.157080  521964 type.go:168] "Request Body" body=""
	I1201 21:08:45.157172  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:45.157650  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:45.157719  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:45.656935  521964 type.go:168] "Request Body" body=""
	I1201 21:08:45.657016  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:45.657311  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:46.157385  521964 type.go:168] "Request Body" body=""
	I1201 21:08:46.157475  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:46.157855  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:46.657753  521964 type.go:168] "Request Body" body=""
	I1201 21:08:46.657842  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:46.658189  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:47.157536  521964 type.go:168] "Request Body" body=""
	I1201 21:08:47.157614  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:47.157944  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:47.157998  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:47.657747  521964 type.go:168] "Request Body" body=""
	I1201 21:08:47.657826  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:47.658196  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:48.157876  521964 type.go:168] "Request Body" body=""
	I1201 21:08:48.157958  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:48.158348  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:48.656928  521964 type.go:168] "Request Body" body=""
	I1201 21:08:48.657026  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:48.657375  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:49.156985  521964 type.go:168] "Request Body" body=""
	I1201 21:08:49.157067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:49.157458  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:49.657192  521964 type.go:168] "Request Body" body=""
	I1201 21:08:49.657287  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:49.657715  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:49.657793  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:50.157561  521964 type.go:168] "Request Body" body=""
	I1201 21:08:50.157644  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:50.157981  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:50.657775  521964 type.go:168] "Request Body" body=""
	I1201 21:08:50.657859  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:50.658229  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:51.156948  521964 type.go:168] "Request Body" body=""
	I1201 21:08:51.157046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:51.157416  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:51.656916  521964 type.go:168] "Request Body" body=""
	I1201 21:08:51.656999  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:51.657330  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:52.157006  521964 type.go:168] "Request Body" body=""
	I1201 21:08:52.157094  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:52.157485  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:52.157551  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:52.657260  521964 type.go:168] "Request Body" body=""
	I1201 21:08:52.657345  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:52.657796  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:53.157505  521964 type.go:168] "Request Body" body=""
	I1201 21:08:53.157589  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:53.157948  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:53.657814  521964 type.go:168] "Request Body" body=""
	I1201 21:08:53.657901  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:53.658274  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:54.157033  521964 type.go:168] "Request Body" body=""
	I1201 21:08:54.157120  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:54.157494  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:54.657829  521964 type.go:168] "Request Body" body=""
	I1201 21:08:54.657912  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:54.658226  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:54.658280  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:55.156969  521964 type.go:168] "Request Body" body=""
	I1201 21:08:55.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:55.157449  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:55.657040  521964 type.go:168] "Request Body" body=""
	I1201 21:08:55.657127  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:55.657483  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:55.716783  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:08:55.791498  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:55.795332  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:55.795559  521964 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1201 21:08:56.157158  521964 type.go:168] "Request Body" body=""
	I1201 21:08:56.157251  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:56.157619  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:56.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:08:56.657679  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:56.658038  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:57.157909  521964 type.go:168] "Request Body" body=""
	I1201 21:08:57.157989  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:57.158351  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:57.158413  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:57.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:08:57.656992  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:57.657380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:58.074174  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:08:58.149106  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:58.149168  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:58.149265  521964 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1201 21:08:58.152649  521964 out.go:179] * Enabled addons: 
	I1201 21:08:58.156383  521964 addons.go:530] duration metric: took 1m42.00648536s for enable addons: enabled=[]
	I1201 21:08:58.157274  521964 type.go:168] "Request Body" body=""
	I1201 21:08:58.157352  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:58.157737  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:58.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:08:58.657670  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:58.658025  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:59.157338  521964 type.go:168] "Request Body" body=""
	I1201 21:08:59.157435  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:59.157723  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:59.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:08:59.657679  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:59.658051  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:59.658126  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:00.157924  521964 type.go:168] "Request Body" body=""
	I1201 21:09:00.158055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:00.158429  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:00.656938  521964 type.go:168] "Request Body" body=""
	I1201 21:09:00.657023  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:00.657396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:01.157023  521964 type.go:168] "Request Body" body=""
	I1201 21:09:01.157113  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:01.157519  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:01.657045  521964 type.go:168] "Request Body" body=""
	I1201 21:09:01.657134  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:01.657523  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:02.157228  521964 type.go:168] "Request Body" body=""
	I1201 21:09:02.157309  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:02.157730  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:02.157812  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:02.657697  521964 type.go:168] "Request Body" body=""
	I1201 21:09:02.657797  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:02.658264  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:03.157016  521964 type.go:168] "Request Body" body=""
	I1201 21:09:03.157105  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:03.157506  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:03.656940  521964 type.go:168] "Request Body" body=""
	I1201 21:09:03.657023  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:03.657317  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:04.157109  521964 type.go:168] "Request Body" body=""
	I1201 21:09:04.157198  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:04.157621  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:04.657376  521964 type.go:168] "Request Body" body=""
	I1201 21:09:04.657464  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:04.657841  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:04.657911  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:05.157626  521964 type.go:168] "Request Body" body=""
	I1201 21:09:05.157704  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:05.158028  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:05.657928  521964 type.go:168] "Request Body" body=""
	I1201 21:09:05.658022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:05.658411  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:06.157283  521964 type.go:168] "Request Body" body=""
	I1201 21:09:06.157384  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:06.157756  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:06.657421  521964 type.go:168] "Request Body" body=""
	I1201 21:09:06.657507  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:06.657800  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:07.157695  521964 type.go:168] "Request Body" body=""
	I1201 21:09:07.157786  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:07.158194  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:07.158265  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:07.656962  521964 type.go:168] "Request Body" body=""
	I1201 21:09:07.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:07.657425  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:08.157836  521964 type.go:168] "Request Body" body=""
	I1201 21:09:08.157917  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:08.158191  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:08.657012  521964 type.go:168] "Request Body" body=""
	I1201 21:09:08.657104  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:08.657486  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:09.156993  521964 type.go:168] "Request Body" body=""
	I1201 21:09:09.157089  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:09.157471  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:09.657023  521964 type.go:168] "Request Body" body=""
	I1201 21:09:09.657120  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:09.657538  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:09.657606  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:10.156999  521964 type.go:168] "Request Body" body=""
	I1201 21:09:10.157086  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:10.157484  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:10.657232  521964 type.go:168] "Request Body" body=""
	I1201 21:09:10.657327  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:10.657688  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:11.157533  521964 type.go:168] "Request Body" body=""
	I1201 21:09:11.157620  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:11.157927  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:11.656987  521964 type.go:168] "Request Body" body=""
	I1201 21:09:11.657072  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:11.657395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:12.157102  521964 type.go:168] "Request Body" body=""
	I1201 21:09:12.157196  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:12.157583  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:12.157646  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:12.657123  521964 type.go:168] "Request Body" body=""
	I1201 21:09:12.657203  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:12.657499  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:13.156966  521964 type.go:168] "Request Body" body=""
	I1201 21:09:13.157055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:13.157438  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:13.656965  521964 type.go:168] "Request Body" body=""
	I1201 21:09:13.657049  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:13.657420  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:14.157820  521964 type.go:168] "Request Body" body=""
	I1201 21:09:14.157917  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:14.158213  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:14.158267  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:14.656980  521964 type.go:168] "Request Body" body=""
	I1201 21:09:14.657065  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:14.657454  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:15.157262  521964 type.go:168] "Request Body" body=""
	I1201 21:09:15.157373  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:15.157794  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:15.657581  521964 type.go:168] "Request Body" body=""
	I1201 21:09:15.657709  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:15.658011  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:16.157632  521964 type.go:168] "Request Body" body=""
	I1201 21:09:16.157709  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:16.158115  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:16.657457  521964 type.go:168] "Request Body" body=""
	I1201 21:09:16.657635  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:16.658136  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:16.658210  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:17.156895  521964 type.go:168] "Request Body" body=""
	I1201 21:09:17.157017  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:17.157412  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:17.657169  521964 type.go:168] "Request Body" body=""
	I1201 21:09:17.657255  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:17.657728  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:18.157777  521964 type.go:168] "Request Body" body=""
	I1201 21:09:18.157890  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:18.158292  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:18.656939  521964 type.go:168] "Request Body" body=""
	I1201 21:09:18.657031  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:18.657390  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:19.157017  521964 type.go:168] "Request Body" body=""
	I1201 21:09:19.157103  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:19.157518  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:19.157588  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:19.657290  521964 type.go:168] "Request Body" body=""
	I1201 21:09:19.657384  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:19.657811  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:20.157631  521964 type.go:168] "Request Body" body=""
	I1201 21:09:20.157730  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:20.158033  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:20.657806  521964 type.go:168] "Request Body" body=""
	I1201 21:09:20.657889  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:20.658276  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:21.156985  521964 type.go:168] "Request Body" body=""
	I1201 21:09:21.157070  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:21.157465  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:21.656919  521964 type.go:168] "Request Body" body=""
	I1201 21:09:21.657003  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:21.657335  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:21.657390  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:22.157006  521964 type.go:168] "Request Body" body=""
	I1201 21:09:22.157096  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:22.157477  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:22.657014  521964 type.go:168] "Request Body" body=""
	I1201 21:09:22.657111  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:22.657539  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:23.157097  521964 type.go:168] "Request Body" body=""
	I1201 21:09:23.157195  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:23.157520  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:23.656994  521964 type.go:168] "Request Body" body=""
	I1201 21:09:23.657090  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:23.657519  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:23.657588  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:24.157112  521964 type.go:168] "Request Body" body=""
	I1201 21:09:24.157201  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:24.157599  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:24.657319  521964 type.go:168] "Request Body" body=""
	I1201 21:09:24.657392  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:24.657673  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:25.156980  521964 type.go:168] "Request Body" body=""
	I1201 21:09:25.157061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:25.157490  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:25.657225  521964 type.go:168] "Request Body" body=""
	I1201 21:09:25.657322  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:25.657718  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:25.657784  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:26.157490  521964 type.go:168] "Request Body" body=""
	I1201 21:09:26.157579  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:26.157896  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:26.657062  521964 type.go:168] "Request Body" body=""
	I1201 21:09:26.657152  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:26.657508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:27.157009  521964 type.go:168] "Request Body" body=""
	I1201 21:09:27.157105  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:27.157490  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:27.656936  521964 type.go:168] "Request Body" body=""
	I1201 21:09:27.657022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:27.657384  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:28.157006  521964 type.go:168] "Request Body" body=""
	I1201 21:09:28.157101  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:28.157533  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:28.157613  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:28.657356  521964 type.go:168] "Request Body" body=""
	I1201 21:09:28.657444  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:28.657855  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:29.157632  521964 type.go:168] "Request Body" body=""
	I1201 21:09:29.157718  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:29.158017  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:29.657847  521964 type.go:168] "Request Body" body=""
	I1201 21:09:29.657938  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:29.658379  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:30.157140  521964 type.go:168] "Request Body" body=""
	I1201 21:09:30.157235  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:30.157673  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:30.157765  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:30.657527  521964 type.go:168] "Request Body" body=""
	I1201 21:09:30.657629  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:30.657947  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:31.157843  521964 type.go:168] "Request Body" body=""
	I1201 21:09:31.157942  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:31.158394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:31.657184  521964 type.go:168] "Request Body" body=""
	I1201 21:09:31.657271  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:31.657662  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:32.157380  521964 type.go:168] "Request Body" body=""
	I1201 21:09:32.157463  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:32.157761  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:32.157813  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:32.657593  521964 type.go:168] "Request Body" body=""
	I1201 21:09:32.657683  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:32.658044  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:33.157900  521964 type.go:168] "Request Body" body=""
	I1201 21:09:33.157992  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:33.158384  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:33.656918  521964 type.go:168] "Request Body" body=""
	I1201 21:09:33.656990  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:33.657277  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:34.156983  521964 type.go:168] "Request Body" body=""
	I1201 21:09:34.157073  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:34.157434  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:34.656961  521964 type.go:168] "Request Body" body=""
	I1201 21:09:34.657042  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:34.657395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:34.657466  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:35.157073  521964 type.go:168] "Request Body" body=""
	I1201 21:09:35.157156  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:35.157471  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:35.656996  521964 type.go:168] "Request Body" body=""
	I1201 21:09:35.657088  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:35.657485  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:36.157396  521964 type.go:168] "Request Body" body=""
	I1201 21:09:36.157480  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:36.157836  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:36.657755  521964 type.go:168] "Request Body" body=""
	I1201 21:09:36.657834  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:36.658119  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:36.658171  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:37.156863  521964 type.go:168] "Request Body" body=""
	I1201 21:09:37.156942  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:37.157295  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:37.657055  521964 type.go:168] "Request Body" body=""
	I1201 21:09:37.657144  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:37.657495  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:38.156908  521964 type.go:168] "Request Body" body=""
	I1201 21:09:38.156981  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:38.157238  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:38.656975  521964 type.go:168] "Request Body" body=""
	I1201 21:09:38.657054  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:38.657402  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:39.157119  521964 type.go:168] "Request Body" body=""
	I1201 21:09:39.157202  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:39.157574  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:39.157635  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:39.656880  521964 type.go:168] "Request Body" body=""
	I1201 21:09:39.656951  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:39.657222  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:40.156899  521964 type.go:168] "Request Body" body=""
	I1201 21:09:40.156974  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:40.157303  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:40.656905  521964 type.go:168] "Request Body" body=""
	I1201 21:09:40.656985  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:40.657322  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:41.157534  521964 type.go:168] "Request Body" body=""
	I1201 21:09:41.157609  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:41.157874  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:41.157915  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:41.657857  521964 type.go:168] "Request Body" body=""
	I1201 21:09:41.657938  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:41.658297  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:42.157048  521964 type.go:168] "Request Body" body=""
	I1201 21:09:42.157140  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:42.157537  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:42.657274  521964 type.go:168] "Request Body" body=""
	I1201 21:09:42.657353  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:42.657634  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:43.156967  521964 type.go:168] "Request Body" body=""
	I1201 21:09:43.157039  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:43.157360  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:43.656976  521964 type.go:168] "Request Body" body=""
	I1201 21:09:43.657053  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:43.657381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:43.657439  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:44.157645  521964 type.go:168] "Request Body" body=""
	I1201 21:09:44.157713  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:44.157985  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:44.657826  521964 type.go:168] "Request Body" body=""
	I1201 21:09:44.657923  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:44.658392  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:45.157027  521964 type.go:168] "Request Body" body=""
	I1201 21:09:45.157125  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:45.157611  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:45.656842  521964 type.go:168] "Request Body" body=""
	I1201 21:09:45.656917  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:45.657187  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:46.157288  521964 type.go:168] "Request Body" body=""
	I1201 21:09:46.157362  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:46.157699  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:46.157757  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:46.657562  521964 type.go:168] "Request Body" body=""
	I1201 21:09:46.657642  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:46.658013  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:47.157757  521964 type.go:168] "Request Body" body=""
	I1201 21:09:47.157829  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:47.158112  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:47.657894  521964 type.go:168] "Request Body" body=""
	I1201 21:09:47.657972  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:47.658340  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:48.156992  521964 type.go:168] "Request Body" body=""
	I1201 21:09:48.157083  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:48.157458  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:48.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:09:48.657654  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:48.657937  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:48.657979  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:49.157706  521964 type.go:168] "Request Body" body=""
	I1201 21:09:49.157785  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:49.158140  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:49.657844  521964 type.go:168] "Request Body" body=""
	I1201 21:09:49.657921  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:49.658333  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:50.156929  521964 type.go:168] "Request Body" body=""
	I1201 21:09:50.157000  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:50.157277  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:50.656964  521964 type.go:168] "Request Body" body=""
	I1201 21:09:50.657044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:50.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:51.157095  521964 type.go:168] "Request Body" body=""
	I1201 21:09:51.157176  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:51.157528  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:51.157583  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:51.656908  521964 type.go:168] "Request Body" body=""
	I1201 21:09:51.656978  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:51.657247  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:52.156952  521964 type.go:168] "Request Body" body=""
	I1201 21:09:52.157030  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:52.157355  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:52.656992  521964 type.go:168] "Request Body" body=""
	I1201 21:09:52.657082  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:52.657488  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:53.157030  521964 type.go:168] "Request Body" body=""
	I1201 21:09:53.157109  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:53.157430  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:53.656984  521964 type.go:168] "Request Body" body=""
	I1201 21:09:53.657067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:53.657399  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:53.657456  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:54.156969  521964 type.go:168] "Request Body" body=""
	I1201 21:09:54.157048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:54.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:54.657665  521964 type.go:168] "Request Body" body=""
	I1201 21:09:54.657741  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:54.658010  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:55.157807  521964 type.go:168] "Request Body" body=""
	I1201 21:09:55.157877  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:55.158212  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:55.656938  521964 type.go:168] "Request Body" body=""
	I1201 21:09:55.657015  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:55.657364  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:56.157167  521964 type.go:168] "Request Body" body=""
	I1201 21:09:56.157246  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:56.157570  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:56.157631  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:56.657418  521964 type.go:168] "Request Body" body=""
	I1201 21:09:56.657498  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:56.657830  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:57.157641  521964 type.go:168] "Request Body" body=""
	I1201 21:09:57.157734  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:57.158097  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:57.657841  521964 type.go:168] "Request Body" body=""
	I1201 21:09:57.657910  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:57.658189  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:58.156868  521964 type.go:168] "Request Body" body=""
	I1201 21:09:58.156944  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:58.157264  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:58.656995  521964 type.go:168] "Request Body" body=""
	I1201 21:09:58.657083  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:58.657454  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:58.657513  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:59.157748  521964 type.go:168] "Request Body" body=""
	I1201 21:09:59.157815  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:59.158119  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:59.656860  521964 type.go:168] "Request Body" body=""
	I1201 21:09:59.656934  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:59.657255  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:00.182510  521964 type.go:168] "Request Body" body=""
	I1201 21:10:00.182611  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:00.182943  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:00.657771  521964 type.go:168] "Request Body" body=""
	I1201 21:10:00.657850  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:00.658154  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:00.658206  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:01.156889  521964 type.go:168] "Request Body" body=""
	I1201 21:10:01.156992  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:01.157298  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:01.657214  521964 type.go:168] "Request Body" body=""
	I1201 21:10:01.657298  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:01.657646  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:02.157865  521964 type.go:168] "Request Body" body=""
	I1201 21:10:02.157946  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:02.158249  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:02.656955  521964 type.go:168] "Request Body" body=""
	I1201 21:10:02.657029  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:02.657347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:03.156981  521964 type.go:168] "Request Body" body=""
	I1201 21:10:03.157059  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:03.157411  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:03.157464  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:03.656988  521964 type.go:168] "Request Body" body=""
	I1201 21:10:03.657085  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:03.657453  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:04.156958  521964 type.go:168] "Request Body" body=""
	I1201 21:10:04.157031  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:04.157381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:04.657145  521964 type.go:168] "Request Body" body=""
	I1201 21:10:04.657224  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:04.657551  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:05.159263  521964 type.go:168] "Request Body" body=""
	I1201 21:10:05.159342  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:05.159636  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:05.159683  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:05.656989  521964 type.go:168] "Request Body" body=""
	I1201 21:10:05.657074  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:05.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:06.157539  521964 type.go:168] "Request Body" body=""
	I1201 21:10:06.157637  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:06.158058  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:06.657526  521964 type.go:168] "Request Body" body=""
	I1201 21:10:06.657604  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:06.657867  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:07.157646  521964 type.go:168] "Request Body" body=""
	I1201 21:10:07.157727  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:07.158042  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:07.657854  521964 type.go:168] "Request Body" body=""
	I1201 21:10:07.657935  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:07.658292  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:07.658351  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:08.157603  521964 type.go:168] "Request Body" body=""
	I1201 21:10:08.157674  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:08.157973  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:08.657777  521964 type.go:168] "Request Body" body=""
	I1201 21:10:08.657862  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:08.658197  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:09.156897  521964 type.go:168] "Request Body" body=""
	I1201 21:10:09.156973  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:09.157298  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:09.656870  521964 type.go:168] "Request Body" body=""
	I1201 21:10:09.656947  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:09.657210  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:10.156995  521964 type.go:168] "Request Body" body=""
	I1201 21:10:10.157076  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:10.157429  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:10.157492  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:10.657080  521964 type.go:168] "Request Body" body=""
	I1201 21:10:10.657192  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:10.657646  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:11.157154  521964 type.go:168] "Request Body" body=""
	I1201 21:10:11.157228  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:11.157607  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:11.657517  521964 type.go:168] "Request Body" body=""
	I1201 21:10:11.657597  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:11.658000  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:12.157792  521964 type.go:168] "Request Body" body=""
	I1201 21:10:12.157864  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:12.158185  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:12.158240  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:12.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:10:12.656959  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:12.657219  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:13.156967  521964 type.go:168] "Request Body" body=""
	I1201 21:10:13.157051  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:13.157415  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:13.657121  521964 type.go:168] "Request Body" body=""
	I1201 21:10:13.657199  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:13.657550  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:14.157841  521964 type.go:168] "Request Body" body=""
	I1201 21:10:14.157913  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:14.158250  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:14.158314  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:14.656980  521964 type.go:168] "Request Body" body=""
	I1201 21:10:14.657062  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:14.657362  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:15.156981  521964 type.go:168] "Request Body" body=""
	I1201 21:10:15.157065  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:15.157428  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:15.656915  521964 type.go:168] "Request Body" body=""
	I1201 21:10:15.656989  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:15.657251  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:16.157278  521964 type.go:168] "Request Body" body=""
	I1201 21:10:16.157357  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:16.157705  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:16.657618  521964 type.go:168] "Request Body" body=""
	I1201 21:10:16.657700  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:16.658040  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:16.658091  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:17.157765  521964 type.go:168] "Request Body" body=""
	I1201 21:10:17.157836  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:17.158164  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:17.657888  521964 type.go:168] "Request Body" body=""
	I1201 21:10:17.657971  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:17.658355  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:18.156999  521964 type.go:168] "Request Body" body=""
	I1201 21:10:18.157092  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:18.157410  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:18.657112  521964 type.go:168] "Request Body" body=""
	I1201 21:10:18.657195  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:18.657574  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:19.156976  521964 type.go:168] "Request Body" body=""
	I1201 21:10:19.157055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:19.157401  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:19.157452  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:19.657118  521964 type.go:168] "Request Body" body=""
	I1201 21:10:19.657191  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:19.657516  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:20.156931  521964 type.go:168] "Request Body" body=""
	I1201 21:10:20.157015  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:20.157379  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:20.656945  521964 type.go:168] "Request Body" body=""
	I1201 21:10:20.657020  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:20.657391  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:21.157095  521964 type.go:168] "Request Body" body=""
	I1201 21:10:21.157176  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:21.157552  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:21.157608  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:21.657312  521964 type.go:168] "Request Body" body=""
	I1201 21:10:21.657400  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:21.657677  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:22.156963  521964 type.go:168] "Request Body" body=""
	I1201 21:10:22.157039  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:22.157399  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:22.656978  521964 type.go:168] "Request Body" body=""
	I1201 21:10:22.657053  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:22.657368  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:23.156906  521964 type.go:168] "Request Body" body=""
	I1201 21:10:23.156979  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:23.157247  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:23.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:10:23.657058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:23.657411  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:23.657467  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:24.157128  521964 type.go:168] "Request Body" body=""
	I1201 21:10:24.157203  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:24.157551  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:24.657808  521964 type.go:168] "Request Body" body=""
	I1201 21:10:24.657883  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:24.658178  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:25.156896  521964 type.go:168] "Request Body" body=""
	I1201 21:10:25.156988  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:25.157349  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:25.657068  521964 type.go:168] "Request Body" body=""
	I1201 21:10:25.657155  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:25.657524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:25.657581  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:26.157344  521964 type.go:168] "Request Body" body=""
	I1201 21:10:26.157430  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:26.157711  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:26.657676  521964 type.go:168] "Request Body" body=""
	I1201 21:10:26.657747  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:26.658068  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:27.157849  521964 type.go:168] "Request Body" body=""
	I1201 21:10:27.157936  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:27.158262  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:27.656928  521964 type.go:168] "Request Body" body=""
	I1201 21:10:27.656998  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:27.657287  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:28.156897  521964 type.go:168] "Request Body" body=""
	I1201 21:10:28.156978  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:28.157356  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:28.157423  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:28.656939  521964 type.go:168] "Request Body" body=""
	I1201 21:10:28.657026  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:28.657407  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:29.157147  521964 type.go:168] "Request Body" body=""
	I1201 21:10:29.157277  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:29.157661  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:29.657507  521964 type.go:168] "Request Body" body=""
	I1201 21:10:29.657592  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:29.657974  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:30.157860  521964 type.go:168] "Request Body" body=""
	I1201 21:10:30.157951  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:30.158382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:30.158453  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:30.656922  521964 type.go:168] "Request Body" body=""
	I1201 21:10:30.656991  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:30.657266  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:31.156994  521964 type.go:168] "Request Body" body=""
	I1201 21:10:31.157077  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:31.157409  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:31.657398  521964 type.go:168] "Request Body" body=""
	I1201 21:10:31.657481  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:31.657807  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:32.157533  521964 type.go:168] "Request Body" body=""
	I1201 21:10:32.157604  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:32.157880  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:32.657746  521964 type.go:168] "Request Body" body=""
	I1201 21:10:32.657828  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:32.658176  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:32.658229  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:33.156931  521964 type.go:168] "Request Body" body=""
	I1201 21:10:33.157018  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:33.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:33.657643  521964 type.go:168] "Request Body" body=""
	I1201 21:10:33.657710  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:33.658006  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:34.157807  521964 type.go:168] "Request Body" body=""
	I1201 21:10:34.157894  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:34.158278  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:34.656986  521964 type.go:168] "Request Body" body=""
	I1201 21:10:34.657059  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:34.657396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:35.157082  521964 type.go:168] "Request Body" body=""
	I1201 21:10:35.157199  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:35.157466  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:35.157521  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:35.656976  521964 type.go:168] "Request Body" body=""
	I1201 21:10:35.657056  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:35.657353  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:36.157368  521964 type.go:168] "Request Body" body=""
	I1201 21:10:36.157452  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:36.157808  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:36.657277  521964 type.go:168] "Request Body" body=""
	I1201 21:10:36.657352  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:36.657623  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:37.156972  521964 type.go:168] "Request Body" body=""
	I1201 21:10:37.157053  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:37.157410  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:37.656998  521964 type.go:168] "Request Body" body=""
	I1201 21:10:37.657079  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:37.657415  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:37.657471  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:38.156915  521964 type.go:168] "Request Body" body=""
	I1201 21:10:38.156982  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:38.157242  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:38.656962  521964 type.go:168] "Request Body" body=""
	I1201 21:10:38.657036  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:38.657361  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:39.156965  521964 type.go:168] "Request Body" body=""
	I1201 21:10:39.157041  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:39.157378  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:39.657653  521964 type.go:168] "Request Body" body=""
	I1201 21:10:39.657723  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:39.657992  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:39.658033  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:40.157791  521964 type.go:168] "Request Body" body=""
	I1201 21:10:40.157881  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:40.158267  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:40.656973  521964 type.go:168] "Request Body" body=""
	I1201 21:10:40.657052  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:40.657374  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:41.157040  521964 type.go:168] "Request Body" body=""
	I1201 21:10:41.157114  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:41.157371  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:41.657289  521964 type.go:168] "Request Body" body=""
	I1201 21:10:41.657371  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:41.657729  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:42.157592  521964 type.go:168] "Request Body" body=""
	I1201 21:10:42.157681  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:42.158115  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:42.158193  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:42.657466  521964 type.go:168] "Request Body" body=""
	I1201 21:10:42.657542  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:42.657815  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:43.157576  521964 type.go:168] "Request Body" body=""
	I1201 21:10:43.157658  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:43.158000  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:43.657674  521964 type.go:168] "Request Body" body=""
	I1201 21:10:43.657745  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:43.658086  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:44.157304  521964 type.go:168] "Request Body" body=""
	I1201 21:10:44.157391  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:44.157723  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:44.657534  521964 type.go:168] "Request Body" body=""
	I1201 21:10:44.657625  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:44.657958  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:44.658013  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:45.157841  521964 type.go:168] "Request Body" body=""
	I1201 21:10:45.157928  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:45.158336  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:45.657663  521964 type.go:168] "Request Body" body=""
	I1201 21:10:45.657751  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:45.658031  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:46.157548  521964 type.go:168] "Request Body" body=""
	I1201 21:10:46.157629  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:46.157950  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:46.657877  521964 type.go:168] "Request Body" body=""
	I1201 21:10:46.657952  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:46.658291  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:46.658347  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:47.156857  521964 type.go:168] "Request Body" body=""
	I1201 21:10:47.156933  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:47.157198  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:47.656938  521964 type.go:168] "Request Body" body=""
	I1201 21:10:47.657018  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:47.657397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:48.157015  521964 type.go:168] "Request Body" body=""
	I1201 21:10:48.157088  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:48.157423  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:48.657541  521964 type.go:168] "Request Body" body=""
	I1201 21:10:48.657618  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:48.657936  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:49.157607  521964 type.go:168] "Request Body" body=""
	I1201 21:10:49.157694  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:49.158025  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:49.158076  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:49.657812  521964 type.go:168] "Request Body" body=""
	I1201 21:10:49.657885  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:49.658194  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:50.157521  521964 type.go:168] "Request Body" body=""
	I1201 21:10:50.157593  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:50.157864  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:50.657707  521964 type.go:168] "Request Body" body=""
	I1201 21:10:50.657786  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:50.658124  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:51.157805  521964 type.go:168] "Request Body" body=""
	I1201 21:10:51.157886  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:51.158224  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:51.158279  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:51.657127  521964 type.go:168] "Request Body" body=""
	I1201 21:10:51.657207  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:51.657471  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:52.156922  521964 type.go:168] "Request Body" body=""
	I1201 21:10:52.157004  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:52.157305  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:52.656968  521964 type.go:168] "Request Body" body=""
	I1201 21:10:52.657044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:52.657379  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:53.156947  521964 type.go:168] "Request Body" body=""
	I1201 21:10:53.157022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:53.157288  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:53.656961  521964 type.go:168] "Request Body" body=""
	I1201 21:10:53.657037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:53.657360  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:53.657416  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:54.157114  521964 type.go:168] "Request Body" body=""
	I1201 21:10:54.157189  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:54.157509  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:54.656919  521964 type.go:168] "Request Body" body=""
	I1201 21:10:54.656990  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:54.657260  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:55.157007  521964 type.go:168] "Request Body" body=""
	I1201 21:10:55.157093  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:55.157520  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:55.657242  521964 type.go:168] "Request Body" body=""
	I1201 21:10:55.657323  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:55.657660  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:55.657717  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:56.157590  521964 type.go:168] "Request Body" body=""
	I1201 21:10:56.157668  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:56.157942  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:56.657918  521964 type.go:168] "Request Body" body=""
	I1201 21:10:56.657994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:56.658356  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:57.156965  521964 type.go:168] "Request Body" body=""
	I1201 21:10:57.157050  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:57.157377  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:57.657638  521964 type.go:168] "Request Body" body=""
	I1201 21:10:57.657712  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:57.657982  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:57.658023  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:58.157749  521964 type.go:168] "Request Body" body=""
	I1201 21:10:58.157829  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:58.158147  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:58.656879  521964 type.go:168] "Request Body" body=""
	I1201 21:10:58.656954  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:58.657292  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:59.156909  521964 type.go:168] "Request Body" body=""
	I1201 21:10:59.156979  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:59.157246  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:59.656979  521964 type.go:168] "Request Body" body=""
	I1201 21:10:59.657066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:59.657429  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:00.157201  521964 type.go:168] "Request Body" body=""
	I1201 21:11:00.157287  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:00.157616  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:00.157684  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:00.657844  521964 type.go:168] "Request Body" body=""
	I1201 21:11:00.657912  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:00.658231  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:01.156967  521964 type.go:168] "Request Body" body=""
	I1201 21:11:01.157048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:01.157426  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:01.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:11:01.657102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:01.657407  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:02.156872  521964 type.go:168] "Request Body" body=""
	I1201 21:11:02.156950  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:02.157232  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:02.656970  521964 type.go:168] "Request Body" body=""
	I1201 21:11:02.657044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:02.657347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:02.657392  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:03.157114  521964 type.go:168] "Request Body" body=""
	I1201 21:11:03.157193  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:03.157508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:03.656873  521964 type.go:168] "Request Body" body=""
	I1201 21:11:03.656949  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:03.657257  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:04.156981  521964 type.go:168] "Request Body" body=""
	I1201 21:11:04.157079  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:04.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:04.657086  521964 type.go:168] "Request Body" body=""
	I1201 21:11:04.657170  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:04.657515  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:04.657568  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:05.157777  521964 type.go:168] "Request Body" body=""
	I1201 21:11:05.157855  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:05.158116  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:05.657893  521964 type.go:168] "Request Body" body=""
	I1201 21:11:05.657976  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:05.658256  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:06.157228  521964 type.go:168] "Request Body" body=""
	I1201 21:11:06.157325  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:06.157672  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:06.657576  521964 type.go:168] "Request Body" body=""
	I1201 21:11:06.657644  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:06.657918  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:06.657957  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:07.157699  521964 type.go:168] "Request Body" body=""
	I1201 21:11:07.157770  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:07.158064  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:07.657781  521964 type.go:168] "Request Body" body=""
	I1201 21:11:07.657859  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:07.658224  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:08.157367  521964 type.go:168] "Request Body" body=""
	I1201 21:11:08.157437  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:08.157715  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:08.657499  521964 type.go:168] "Request Body" body=""
	I1201 21:11:08.657592  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:08.657968  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:08.658028  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:09.157829  521964 type.go:168] "Request Body" body=""
	I1201 21:11:09.157911  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:09.158288  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:09.656917  521964 type.go:168] "Request Body" body=""
	I1201 21:11:09.656990  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:09.657288  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:10.156991  521964 type.go:168] "Request Body" body=""
	I1201 21:11:10.157073  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:10.157446  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:10.657170  521964 type.go:168] "Request Body" body=""
	I1201 21:11:10.657248  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:10.657599  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:11.156833  521964 type.go:168] "Request Body" body=""
	I1201 21:11:11.156912  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:11.157200  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:11.157249  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:11.656972  521964 type.go:168] "Request Body" body=""
	I1201 21:11:11.657054  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:11.657556  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:12.157243  521964 type.go:168] "Request Body" body=""
	I1201 21:11:12.157318  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:12.157669  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:12.657823  521964 type.go:168] "Request Body" body=""
	I1201 21:11:12.657911  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:12.658208  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:13.156933  521964 type.go:168] "Request Body" body=""
	I1201 21:11:13.157010  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:13.157369  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:13.157434  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:13.657105  521964 type.go:168] "Request Body" body=""
	I1201 21:11:13.657190  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:13.657535  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:14.157809  521964 type.go:168] "Request Body" body=""
	I1201 21:11:14.157875  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:14.158149  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:14.657913  521964 type.go:168] "Request Body" body=""
	I1201 21:11:14.658000  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:14.658340  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:15.156989  521964 type.go:168] "Request Body" body=""
	I1201 21:11:15.157075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:15.157421  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:15.157479  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:15.656928  521964 type.go:168] "Request Body" body=""
	I1201 21:11:15.657004  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:15.657310  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:16.157234  521964 type.go:168] "Request Body" body=""
	I1201 21:11:16.157328  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:16.157693  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:16.657344  521964 type.go:168] "Request Body" body=""
	I1201 21:11:16.657439  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:16.657980  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:17.157136  521964 type.go:168] "Request Body" body=""
	I1201 21:11:17.157223  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:17.157592  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:17.157646  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:17.657532  521964 type.go:168] "Request Body" body=""
	I1201 21:11:17.657620  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:17.657985  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:18.157793  521964 type.go:168] "Request Body" body=""
	I1201 21:11:18.157869  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:18.158224  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:18.657332  521964 type.go:168] "Request Body" body=""
	I1201 21:11:18.657414  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:18.657739  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:19.157633  521964 type.go:168] "Request Body" body=""
	I1201 21:11:19.157712  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:19.158075  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:19.158138  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:19.656860  521964 type.go:168] "Request Body" body=""
	I1201 21:11:19.656944  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:19.657367  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:20.157129  521964 type.go:168] "Request Body" body=""
	I1201 21:11:20.157251  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:20.157538  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:20.656988  521964 type.go:168] "Request Body" body=""
	I1201 21:11:20.657069  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:20.657403  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:21.157154  521964 type.go:168] "Request Body" body=""
	I1201 21:11:21.157249  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:21.157653  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:21.657489  521964 type.go:168] "Request Body" body=""
	I1201 21:11:21.657579  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:21.657887  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:21.657951  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:22.157730  521964 type.go:168] "Request Body" body=""
	I1201 21:11:22.157807  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:22.158188  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:22.656943  521964 type.go:168] "Request Body" body=""
	I1201 21:11:22.657022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:22.657362  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:23.157066  521964 type.go:168] "Request Body" body=""
	I1201 21:11:23.157143  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:23.157413  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:23.656990  521964 type.go:168] "Request Body" body=""
	I1201 21:11:23.657067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:23.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:24.157147  521964 type.go:168] "Request Body" body=""
	I1201 21:11:24.157227  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:24.157551  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:24.157604  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:24.657818  521964 type.go:168] "Request Body" body=""
	I1201 21:11:24.657890  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:24.658165  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:25.156979  521964 type.go:168] "Request Body" body=""
	I1201 21:11:25.157066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:25.157466  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:25.657192  521964 type.go:168] "Request Body" body=""
	I1201 21:11:25.657269  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:25.657598  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:26.157266  521964 type.go:168] "Request Body" body=""
	I1201 21:11:26.157339  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:26.157618  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:26.157661  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:26.657561  521964 type.go:168] "Request Body" body=""
	I1201 21:11:26.657639  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:26.658002  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:27.157818  521964 type.go:168] "Request Body" body=""
	I1201 21:11:27.157901  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:27.158277  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:27.657008  521964 type.go:168] "Request Body" body=""
	I1201 21:11:27.657074  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:27.657338  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:28.157024  521964 type.go:168] "Request Body" body=""
	I1201 21:11:28.157108  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:28.157462  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:28.657032  521964 type.go:168] "Request Body" body=""
	I1201 21:11:28.657112  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:28.657442  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:28.657505  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:29.157808  521964 type.go:168] "Request Body" body=""
	I1201 21:11:29.157877  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:29.158164  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:29.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:11:29.656994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:29.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:30.157157  521964 type.go:168] "Request Body" body=""
	I1201 21:11:30.157249  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:30.157650  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:30.657372  521964 type.go:168] "Request Body" body=""
	I1201 21:11:30.657451  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:30.657748  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:30.657794  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:31.157596  521964 type.go:168] "Request Body" body=""
	I1201 21:11:31.157692  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:31.158099  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:31.657089  521964 type.go:168] "Request Body" body=""
	I1201 21:11:31.657174  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:31.657530  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:32.156920  521964 type.go:168] "Request Body" body=""
	I1201 21:11:32.157002  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:32.157283  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:32.656965  521964 type.go:168] "Request Body" body=""
	I1201 21:11:32.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:32.657400  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:33.157120  521964 type.go:168] "Request Body" body=""
	I1201 21:11:33.157204  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:33.157580  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:33.157650  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:33.656925  521964 type.go:168] "Request Body" body=""
	I1201 21:11:33.657005  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:33.657282  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:34.156999  521964 type.go:168] "Request Body" body=""
	I1201 21:11:34.157085  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:34.157508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:34.657236  521964 type.go:168] "Request Body" body=""
	I1201 21:11:34.657329  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:34.657650  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:35.156913  521964 type.go:168] "Request Body" body=""
	I1201 21:11:35.156987  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:35.157331  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:35.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:11:35.657055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:35.657385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:35.657436  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:36.157278  521964 type.go:168] "Request Body" body=""
	I1201 21:11:36.157365  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:36.157713  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:36.657755  521964 type.go:168] "Request Body" body=""
	I1201 21:11:36.657874  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:36.658213  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:37.156873  521964 type.go:168] "Request Body" body=""
	I1201 21:11:37.156946  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:37.157318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:37.656921  521964 type.go:168] "Request Body" body=""
	I1201 21:11:37.656998  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:37.657365  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:38.157094  521964 type.go:168] "Request Body" body=""
	I1201 21:11:38.157172  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:38.157449  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:38.157537  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:38.656976  521964 type.go:168] "Request Body" body=""
	I1201 21:11:38.657054  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:38.657414  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:39.157117  521964 type.go:168] "Request Body" body=""
	I1201 21:11:39.157193  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:39.157513  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:39.656888  521964 type.go:168] "Request Body" body=""
	I1201 21:11:39.656994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:39.657266  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:40.156958  521964 type.go:168] "Request Body" body=""
	I1201 21:11:40.157031  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:40.157358  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:40.657069  521964 type.go:168] "Request Body" body=""
	I1201 21:11:40.657148  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:40.657480  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:40.657538  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:41.156915  521964 type.go:168] "Request Body" body=""
	I1201 21:11:41.156983  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:41.157301  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:41.657216  521964 type.go:168] "Request Body" body=""
	I1201 21:11:41.657295  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:41.657644  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:42.157003  521964 type.go:168] "Request Body" body=""
	I1201 21:11:42.157088  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:42.157475  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:42.657872  521964 type.go:168] "Request Body" body=""
	I1201 21:11:42.657940  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:42.658284  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:42.658338  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:43.156961  521964 type.go:168] "Request Body" body=""
	I1201 21:11:43.157034  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:43.157374  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:43.657103  521964 type.go:168] "Request Body" body=""
	I1201 21:11:43.657182  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:43.657524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:44.156866  521964 type.go:168] "Request Body" body=""
	I1201 21:11:44.156937  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:44.157219  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:44.656982  521964 type.go:168] "Request Body" body=""
	I1201 21:11:44.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:44.657376  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:45.157037  521964 type.go:168] "Request Body" body=""
	I1201 21:11:45.157121  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:45.157482  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:45.157545  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:45.657188  521964 type.go:168] "Request Body" body=""
	I1201 21:11:45.657259  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:45.657524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:46.157054  521964 type.go:168] "Request Body" body=""
	I1201 21:11:46.157131  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:46.157511  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:46.656990  521964 type.go:168] "Request Body" body=""
	I1201 21:11:46.657078  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:46.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:47.157109  521964 type.go:168] "Request Body" body=""
	I1201 21:11:47.157180  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:47.157448  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:47.656964  521964 type.go:168] "Request Body" body=""
	I1201 21:11:47.657093  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:47.657408  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:47.657462  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:48.156988  521964 type.go:168] "Request Body" body=""
	I1201 21:11:48.157067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:48.157406  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:48.657126  521964 type.go:168] "Request Body" body=""
	I1201 21:11:48.657197  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:48.657487  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:49.156981  521964 type.go:168] "Request Body" body=""
	I1201 21:11:49.157066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:49.157397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:49.656960  521964 type.go:168] "Request Body" body=""
	I1201 21:11:49.657037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:49.657346  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:50.156927  521964 type.go:168] "Request Body" body=""
	I1201 21:11:50.157010  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:50.157276  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:50.157327  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:50.657022  521964 type.go:168] "Request Body" body=""
	I1201 21:11:50.657102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:50.657478  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:51.156975  521964 type.go:168] "Request Body" body=""
	I1201 21:11:51.157058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:51.157409  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:51.656919  521964 type.go:168] "Request Body" body=""
	I1201 21:11:51.656998  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:51.657362  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:52.156975  521964 type.go:168] "Request Body" body=""
	I1201 21:11:52.157051  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:52.157406  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:52.157465  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:52.657158  521964 type.go:168] "Request Body" body=""
	I1201 21:11:52.657238  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:52.657574  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:53.156907  521964 type.go:168] "Request Body" body=""
	I1201 21:11:53.156984  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:53.157259  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:53.656971  521964 type.go:168] "Request Body" body=""
	I1201 21:11:53.657042  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:53.657409  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:54.156987  521964 type.go:168] "Request Body" body=""
	I1201 21:11:54.157066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:54.157400  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:54.656919  521964 type.go:168] "Request Body" body=""
	I1201 21:11:54.656994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:54.657292  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:54.657346  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:55.156967  521964 type.go:168] "Request Body" body=""
	I1201 21:11:55.157050  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:55.157385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:55.656978  521964 type.go:168] "Request Body" body=""
	I1201 21:11:55.657053  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:55.657357  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:56.157331  521964 type.go:168] "Request Body" body=""
	I1201 21:11:56.157412  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:56.157693  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:56.657721  521964 type.go:168] "Request Body" body=""
	I1201 21:11:56.657797  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:56.658158  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:56.658204  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:57.156915  521964 type.go:168] "Request Body" body=""
	I1201 21:11:57.157002  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:57.157388  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:57.657664  521964 type.go:168] "Request Body" body=""
	I1201 21:11:57.657735  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:57.658000  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:58.157786  521964 type.go:168] "Request Body" body=""
	I1201 21:11:58.157861  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:58.158295  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:58.657007  521964 type.go:168] "Request Body" body=""
	I1201 21:11:58.657100  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:58.657480  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:59.157749  521964 type.go:168] "Request Body" body=""
	I1201 21:11:59.157823  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:59.158141  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:59.158186  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:59.656847  521964 type.go:168] "Request Body" body=""
	I1201 21:11:59.656927  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:59.657290  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:00.156952  521964 type.go:168] "Request Body" body=""
	I1201 21:12:00.157062  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:00.157388  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:00.657065  521964 type.go:168] "Request Body" body=""
	I1201 21:12:00.657141  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:00.657419  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:01.156993  521964 type.go:168] "Request Body" body=""
	I1201 21:12:01.157080  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:01.157418  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:01.657372  521964 type.go:168] "Request Body" body=""
	I1201 21:12:01.657452  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:01.657807  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:01.657861  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:02.156990  521964 type.go:168] "Request Body" body=""
	I1201 21:12:02.157067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:02.157446  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:02.656975  521964 type.go:168] "Request Body" body=""
	I1201 21:12:02.657050  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:02.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:03.157097  521964 type.go:168] "Request Body" body=""
	I1201 21:12:03.157177  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:03.157545  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:03.657864  521964 type.go:168] "Request Body" body=""
	I1201 21:12:03.657940  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:03.658290  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:03.658354  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:04.157043  521964 type.go:168] "Request Body" body=""
	I1201 21:12:04.157122  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:04.157481  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:04.657071  521964 type.go:168] "Request Body" body=""
	I1201 21:12:04.657150  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:04.657508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:05.157762  521964 type.go:168] "Request Body" body=""
	I1201 21:12:05.157829  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:05.158111  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:05.657870  521964 type.go:168] "Request Body" body=""
	I1201 21:12:05.658003  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:05.658357  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:05.658411  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:06.157176  521964 type.go:168] "Request Body" body=""
	I1201 21:12:06.157261  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:06.157642  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:06.657501  521964 type.go:168] "Request Body" body=""
	I1201 21:12:06.657577  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:06.657845  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:07.157682  521964 type.go:168] "Request Body" body=""
	I1201 21:12:07.157766  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:07.158185  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:07.656894  521964 type.go:168] "Request Body" body=""
	I1201 21:12:07.656972  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:07.657318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:08.157028  521964 type.go:168] "Request Body" body=""
	I1201 21:12:08.157109  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:08.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:08.157437  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:08.656996  521964 type.go:168] "Request Body" body=""
	I1201 21:12:08.657072  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:08.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:09.157160  521964 type.go:168] "Request Body" body=""
	I1201 21:12:09.157245  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:09.157627  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:09.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:12:09.656966  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:09.657243  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:10.156932  521964 type.go:168] "Request Body" body=""
	I1201 21:12:10.157016  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:10.157347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:10.657029  521964 type.go:168] "Request Body" body=""
	I1201 21:12:10.657110  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:10.657478  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:10.657537  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:11.157239  521964 type.go:168] "Request Body" body=""
	I1201 21:12:11.157313  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:11.157609  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:11.657334  521964 type.go:168] "Request Body" body=""
	I1201 21:12:11.657410  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:11.657733  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:12.157529  521964 type.go:168] "Request Body" body=""
	I1201 21:12:12.157603  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:12.157977  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:12.657303  521964 type.go:168] "Request Body" body=""
	I1201 21:12:12.657379  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:12.657647  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:12.657692  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:13.156979  521964 type.go:168] "Request Body" body=""
	I1201 21:12:13.157059  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:13.157445  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:13.657161  521964 type.go:168] "Request Body" body=""
	I1201 21:12:13.657236  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:13.657560  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:14.157233  521964 type.go:168] "Request Body" body=""
	I1201 21:12:14.157309  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:14.157583  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:14.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:12:14.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:14.657408  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:15.157135  521964 type.go:168] "Request Body" body=""
	I1201 21:12:15.157216  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:15.157563  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:15.157629  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:15.657856  521964 type.go:168] "Request Body" body=""
	I1201 21:12:15.657928  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:15.658198  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:16.157210  521964 type.go:168] "Request Body" body=""
	I1201 21:12:16.157294  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:16.157627  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:16.657499  521964 type.go:168] "Request Body" body=""
	I1201 21:12:16.657580  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:16.657918  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:17.157664  521964 type.go:168] "Request Body" body=""
	I1201 21:12:17.157737  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:17.158007  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:17.158051  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:17.657817  521964 type.go:168] "Request Body" body=""
	I1201 21:12:17.657893  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:17.658321  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:18.157126  521964 type.go:168] "Request Body" body=""
	I1201 21:12:18.157218  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:18.157616  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:18.657309  521964 type.go:168] "Request Body" body=""
	I1201 21:12:18.657377  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:18.657641  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:19.157459  521964 type.go:168] "Request Body" body=""
	I1201 21:12:19.157533  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:19.157874  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:19.657700  521964 type.go:168] "Request Body" body=""
	I1201 21:12:19.657774  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:19.658113  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:19.658170  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:20.157420  521964 type.go:168] "Request Body" body=""
	I1201 21:12:20.157499  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:20.157831  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:20.657717  521964 type.go:168] "Request Body" body=""
	I1201 21:12:20.657790  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:20.658137  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:21.156870  521964 type.go:168] "Request Body" body=""
	I1201 21:12:21.156955  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:21.157335  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:21.656896  521964 type.go:168] "Request Body" body=""
	I1201 21:12:21.656973  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:21.657240  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:22.156959  521964 type.go:168] "Request Body" body=""
	I1201 21:12:22.157032  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:22.157337  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:22.157382  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:22.656961  521964 type.go:168] "Request Body" body=""
	I1201 21:12:22.657035  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:22.657334  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:23.156895  521964 type.go:168] "Request Body" body=""
	I1201 21:12:23.156974  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:23.157240  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:23.656942  521964 type.go:168] "Request Body" body=""
	I1201 21:12:23.657018  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:23.657321  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:24.156930  521964 type.go:168] "Request Body" body=""
	I1201 21:12:24.157030  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:24.157353  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:24.157404  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:24.657661  521964 type.go:168] "Request Body" body=""
	I1201 21:12:24.657744  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:24.658139  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:25.156898  521964 type.go:168] "Request Body" body=""
	I1201 21:12:25.157058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:25.157380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:25.657004  521964 type.go:168] "Request Body" body=""
	I1201 21:12:25.657102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:25.657473  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:26.157364  521964 type.go:168] "Request Body" body=""
	I1201 21:12:26.157445  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:26.157715  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:26.157767  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:26.657747  521964 type.go:168] "Request Body" body=""
	I1201 21:12:26.657820  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:26.658119  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:27.157901  521964 type.go:168] "Request Body" body=""
	I1201 21:12:27.157983  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:27.158328  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:27.656880  521964 type.go:168] "Request Body" body=""
	I1201 21:12:27.656966  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:27.657232  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:28.156968  521964 type.go:168] "Request Body" body=""
	I1201 21:12:28.157046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:28.157396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:28.657122  521964 type.go:168] "Request Body" body=""
	I1201 21:12:28.657193  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:28.657567  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:28.657618  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:29.157156  521964 type.go:168] "Request Body" body=""
	I1201 21:12:29.157234  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:29.157509  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:29.656952  521964 type.go:168] "Request Body" body=""
	I1201 21:12:29.657026  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:29.657361  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:30.156982  521964 type.go:168] "Request Body" body=""
	I1201 21:12:30.157060  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:30.157416  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:30.657692  521964 type.go:168] "Request Body" body=""
	I1201 21:12:30.657762  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:30.658041  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:30.658082  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:31.157866  521964 type.go:168] "Request Body" body=""
	I1201 21:12:31.157947  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:31.158324  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:31.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:12:31.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:31.657381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:32.157066  521964 type.go:168] "Request Body" body=""
	I1201 21:12:32.157144  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:32.157399  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:32.656944  521964 type.go:168] "Request Body" body=""
	I1201 21:12:32.657015  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:32.657365  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:33.156964  521964 type.go:168] "Request Body" body=""
	I1201 21:12:33.157045  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:33.157424  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:33.157484  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:33.657133  521964 type.go:168] "Request Body" body=""
	I1201 21:12:33.657209  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:33.657460  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:34.156966  521964 type.go:168] "Request Body" body=""
	I1201 21:12:34.157049  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:34.157398  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:34.657117  521964 type.go:168] "Request Body" body=""
	I1201 21:12:34.657200  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:34.657538  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:35.157873  521964 type.go:168] "Request Body" body=""
	I1201 21:12:35.157950  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:35.158226  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:35.158268  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:35.656942  521964 type.go:168] "Request Body" body=""
	I1201 21:12:35.657022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:35.657366  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:36.157253  521964 type.go:168] "Request Body" body=""
	I1201 21:12:36.157329  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:36.157665  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:36.657154  521964 type.go:168] "Request Body" body=""
	I1201 21:12:36.657221  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:36.657490  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:37.157161  521964 type.go:168] "Request Body" body=""
	I1201 21:12:37.157235  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:37.157578  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:37.657162  521964 type.go:168] "Request Body" body=""
	I1201 21:12:37.657242  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:37.657583  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:37.657637  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:38.156913  521964 type.go:168] "Request Body" body=""
	I1201 21:12:38.156993  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:38.157311  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:38.656975  521964 type.go:168] "Request Body" body=""
	I1201 21:12:38.657056  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:38.657412  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:39.157107  521964 type.go:168] "Request Body" body=""
	I1201 21:12:39.157181  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:39.157541  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:39.657246  521964 type.go:168] "Request Body" body=""
	I1201 21:12:39.657329  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:39.657614  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:40.157008  521964 type.go:168] "Request Body" body=""
	I1201 21:12:40.157081  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:40.157402  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:40.157459  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:40.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:12:40.657058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:40.657389  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:41.156917  521964 type.go:168] "Request Body" body=""
	I1201 21:12:41.157011  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:41.157297  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:41.656997  521964 type.go:168] "Request Body" body=""
	I1201 21:12:41.657083  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:41.657499  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:42.157169  521964 type.go:168] "Request Body" body=""
	I1201 21:12:42.157262  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:42.157666  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:42.157723  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:42.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:12:42.656961  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:42.657222  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:43.156956  521964 type.go:168] "Request Body" body=""
	I1201 21:12:43.157047  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:43.157347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:43.657015  521964 type.go:168] "Request Body" body=""
	I1201 21:12:43.657087  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:43.657366  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:44.156909  521964 type.go:168] "Request Body" body=""
	I1201 21:12:44.156982  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:44.157261  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:44.656982  521964 type.go:168] "Request Body" body=""
	I1201 21:12:44.657068  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:44.657431  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:44.657488  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:45.157013  521964 type.go:168] "Request Body" body=""
	I1201 21:12:45.157096  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:45.157431  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:45.657107  521964 type.go:168] "Request Body" body=""
	I1201 21:12:45.657195  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:45.657476  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:46.157495  521964 type.go:168] "Request Body" body=""
	I1201 21:12:46.157580  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:46.157930  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:46.656884  521964 type.go:168] "Request Body" body=""
	I1201 21:12:46.656964  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:46.657318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:47.157023  521964 type.go:168] "Request Body" body=""
	I1201 21:12:47.157100  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:47.157421  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:47.157476  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:47.656956  521964 type.go:168] "Request Body" body=""
	I1201 21:12:47.657031  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:47.657374  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:48.156953  521964 type.go:168] "Request Body" body=""
	I1201 21:12:48.157032  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:48.157373  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:48.656942  521964 type.go:168] "Request Body" body=""
	I1201 21:12:48.657023  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:48.657325  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:49.157039  521964 type.go:168] "Request Body" body=""
	I1201 21:12:49.157121  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:49.157480  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:49.157538  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:49.656960  521964 type.go:168] "Request Body" body=""
	I1201 21:12:49.657039  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:49.657352  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:50.156889  521964 type.go:168] "Request Body" body=""
	I1201 21:12:50.156960  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:50.157229  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:50.656950  521964 type.go:168] "Request Body" body=""
	I1201 21:12:50.657037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:50.657397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:51.157121  521964 type.go:168] "Request Body" body=""
	I1201 21:12:51.157204  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:51.157551  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:51.157618  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:51.657566  521964 type.go:168] "Request Body" body=""
	I1201 21:12:51.657641  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:51.657931  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:52.157799  521964 type.go:168] "Request Body" body=""
	I1201 21:12:52.157888  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:52.158264  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:52.656986  521964 type.go:168] "Request Body" body=""
	I1201 21:12:52.657083  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:52.657426  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:53.157683  521964 type.go:168] "Request Body" body=""
	I1201 21:12:53.157769  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:53.158044  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:53.158097  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:53.657845  521964 type.go:168] "Request Body" body=""
	I1201 21:12:53.657932  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:53.658305  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:54.156954  521964 type.go:168] "Request Body" body=""
	I1201 21:12:54.157044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:54.157395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:54.656951  521964 type.go:168] "Request Body" body=""
	I1201 21:12:54.657024  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:54.657370  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:55.157133  521964 type.go:168] "Request Body" body=""
	I1201 21:12:55.157212  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:55.157580  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:55.657319  521964 type.go:168] "Request Body" body=""
	I1201 21:12:55.657404  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:55.657768  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:55.657823  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:56.157456  521964 type.go:168] "Request Body" body=""
	I1201 21:12:56.157537  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:56.157827  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:56.657750  521964 type.go:168] "Request Body" body=""
	I1201 21:12:56.657836  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:56.658210  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:57.156961  521964 type.go:168] "Request Body" body=""
	I1201 21:12:57.157036  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:57.157395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:57.657097  521964 type.go:168] "Request Body" body=""
	I1201 21:12:57.657174  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:57.657457  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:58.156992  521964 type.go:168] "Request Body" body=""
	I1201 21:12:58.157072  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:58.157466  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:58.157532  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:58.657043  521964 type.go:168] "Request Body" body=""
	I1201 21:12:58.657124  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:58.657483  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:59.156864  521964 type.go:168] "Request Body" body=""
	I1201 21:12:59.156938  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:59.157199  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:59.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:12:59.656974  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:59.657286  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:00.157057  521964 type.go:168] "Request Body" body=""
	I1201 21:13:00.157147  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:00.157511  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:00.157569  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:00.657428  521964 type.go:168] "Request Body" body=""
	I1201 21:13:00.657504  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:00.657796  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:01.157663  521964 type.go:168] "Request Body" body=""
	I1201 21:13:01.157764  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:01.158124  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:01.656992  521964 type.go:168] "Request Body" body=""
	I1201 21:13:01.657066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:01.657380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:02.157714  521964 type.go:168] "Request Body" body=""
	I1201 21:13:02.157793  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:02.158080  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:02.158125  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:02.657871  521964 type.go:168] "Request Body" body=""
	I1201 21:13:02.657947  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:02.658316  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:03.156973  521964 type.go:168] "Request Body" body=""
	I1201 21:13:03.157059  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:03.157502  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:03.657012  521964 type.go:168] "Request Body" body=""
	I1201 21:13:03.657090  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:03.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:04.157107  521964 type.go:168] "Request Body" body=""
	I1201 21:13:04.157183  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:04.157524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:04.657241  521964 type.go:168] "Request Body" body=""
	I1201 21:13:04.657321  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:04.657639  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:04.657698  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:05.156921  521964 type.go:168] "Request Body" body=""
	I1201 21:13:05.157001  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:05.157325  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:05.656994  521964 type.go:168] "Request Body" body=""
	I1201 21:13:05.657078  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:05.657437  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:06.157391  521964 type.go:168] "Request Body" body=""
	I1201 21:13:06.157477  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:06.157856  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:06.657298  521964 type.go:168] "Request Body" body=""
	I1201 21:13:06.657378  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:06.657684  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:06.657732  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:07.157498  521964 type.go:168] "Request Body" body=""
	I1201 21:13:07.157579  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:07.157929  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:07.657804  521964 type.go:168] "Request Body" body=""
	I1201 21:13:07.657885  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:07.658219  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:08.157597  521964 type.go:168] "Request Body" body=""
	I1201 21:13:08.157669  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:08.157933  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:08.657711  521964 type.go:168] "Request Body" body=""
	I1201 21:13:08.657785  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:08.658162  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:08.658217  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:09.156936  521964 type.go:168] "Request Body" body=""
	I1201 21:13:09.157015  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:09.157375  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:09.657620  521964 type.go:168] "Request Body" body=""
	I1201 21:13:09.657765  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:09.658040  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:10.157874  521964 type.go:168] "Request Body" body=""
	I1201 21:13:10.157960  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:10.158354  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:10.656946  521964 type.go:168] "Request Body" body=""
	I1201 21:13:10.657024  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:10.657358  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:11.157610  521964 type.go:168] "Request Body" body=""
	I1201 21:13:11.157697  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:11.157986  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:11.158031  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:11.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:13:11.656964  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:11.657296  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:12.157004  521964 type.go:168] "Request Body" body=""
	I1201 21:13:12.157081  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:12.157397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:12.657679  521964 type.go:168] "Request Body" body=""
	I1201 21:13:12.657749  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:12.658023  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:13.157872  521964 type.go:168] "Request Body" body=""
	I1201 21:13:13.157950  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:13.158289  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:13.158341  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:13.656969  521964 type.go:168] "Request Body" body=""
	I1201 21:13:13.657052  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:13.657394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:14.156916  521964 type.go:168] "Request Body" body=""
	I1201 21:13:14.156994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:14.157319  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:14.656957  521964 type.go:168] "Request Body" body=""
	I1201 21:13:14.657034  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:14.657371  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:15.156988  521964 type.go:168] "Request Body" body=""
	I1201 21:13:15.157084  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:15.157470  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:15.656852  521964 type.go:168] "Request Body" body=""
	I1201 21:13:15.656945  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:15.657219  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:15.657269  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:16.157274  521964 type.go:168] "Request Body" body=""
	I1201 21:13:16.157357  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:16.157728  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:16.657690  521964 type.go:168] "Request Body" body=""
	I1201 21:13:16.657781  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:16.658180  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:17.157176  521964 type.go:168] "Request Body" body=""
	I1201 21:13:17.157257  521964 node_ready.go:38] duration metric: took 6m0.000516111s for node "functional-198694" to be "Ready" ...
	I1201 21:13:17.164775  521964 out.go:203] 
	W1201 21:13:17.167674  521964 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1201 21:13:17.167697  521964 out.go:285] * 
	W1201 21:13:17.169852  521964 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 21:13:17.172668  521964 out.go:203] 
	
	
	==> CRI-O <==
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.424023121Z" level=info msg="Using the internal default seccomp profile"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.424082033Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.424135652Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.424186474Z" level=info msg="RDT not available in the host system"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.42426561Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.425133768Z" level=info msg="Conmon does support the --sync option"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.425252149Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.425323122Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.426043787Z" level=info msg="Conmon does support the --sync option"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.426151059Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.426378968Z" level=info msg="Updated default CNI network name to "
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.427018437Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\
"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_liste
n = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.427647075Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.4278005Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.488125886Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.488168388Z" level=info msg="Starting seccomp notifier watcher"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.488219783Z" level=info msg="Create NRI interface"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.488326505Z" level=info msg="built-in NRI default validator is disabled"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.488335006Z" level=info msg="runtime interface created"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.488348568Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.488356355Z" level=info msg="runtime interface starting up..."
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.48836305Z" level=info msg="starting plugins..."
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.488380461Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 01 21:07:14 functional-198694 crio[5973]: time="2025-12-01T21:07:14.488450457Z" level=info msg="No systemd watchdog enabled"
	Dec 01 21:07:14 functional-198694 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:13:21.864555    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:13:21.865101    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:13:21.866671    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:13:21.867115    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:13:21.868559    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 1 19:31] hrtimer: interrupt took 3224715 ns
	[Dec 1 20:00] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:16] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 1 20:22] systemd-journald[231]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 1 20:37] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:38] overlayfs: idmapped layers are currently not supported
	[  +0.076902] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 1 20:44] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:45] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:58] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:13:21 up  2:55,  0 user,  load average: 0.42, 0.29, 0.60
	Linux functional-198694 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 01 21:13:19 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:13:20 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1138.
	Dec 01 21:13:20 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:20 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:20 functional-198694 kubelet[9164]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:20 functional-198694 kubelet[9164]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:20 functional-198694 kubelet[9164]: E1201 21:13:20.213882    9164 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:13:20 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:13:20 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:13:20 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1139.
	Dec 01 21:13:20 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:20 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:20 functional-198694 kubelet[9199]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:20 functional-198694 kubelet[9199]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:20 functional-198694 kubelet[9199]: E1201 21:13:20.965182    9199 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:13:20 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:13:20 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:13:21 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1140.
	Dec 01 21:13:21 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:21 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:21 functional-198694 kubelet[9250]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:21 functional-198694 kubelet[9250]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:21 functional-198694 kubelet[9250]: E1201 21:13:21.717631    9250 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:13:21 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:13:21 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694: exit status 2 (388.536833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-198694" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.53s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.59s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 kubectl -- --context functional-198694 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-198694 kubectl -- --context functional-198694 get pods: exit status 1 (120.097598ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-198694 kubectl -- --context functional-198694 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-198694
helpers_test.go:243: (dbg) docker inspect functional-198694:

-- stdout --
	[
	    {
	        "Id": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	        "Created": "2025-12-01T20:58:43.365574809Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515902,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:58:43.423541772Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hostname",
	        "HostsPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hosts",
	        "LogPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8-json.log",
	        "Name": "/functional-198694",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-198694:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-198694",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	                "LowerDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26-init/diff:/var/lib/docker/overlay2/f0ba49b44048d740697b37803f992c2f7a99e21ce77995ff128ceffc01329aa1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-198694",
	                "Source": "/var/lib/docker/volumes/functional-198694/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-198694",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-198694",
	                "name.minikube.sigs.k8s.io": "functional-198694",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8cb3cb57c35171bfce361b9e0de9c9f36ef89baf5e4ad0dd73159d10f1056820",
	            "SandboxKey": "/var/run/docker/netns/8cb3cb57c351",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-198694": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:9a:72:4c:a4:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9750c903db8645b2871ee2eb6fd897b77e607b9a995005513c7bcf81da63c819",
	                    "EndpointID": "884d9ec9fdfc44c10ccd4516f4ea05a765fb3ccb2118db0e8af2392e8613c402",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-198694",
	                        "e545295bd958"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694: exit status 2 (312.454544ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-198694 logs -n 25: (1.10581907s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-074555 image ls --format short --alsologtostderr                                                                                       │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image   │ functional-074555 image ls --format yaml --alsologtostderr                                                                                        │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh     │ functional-074555 ssh pgrep buildkitd                                                                                                             │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │                     │
	│ image   │ functional-074555 image ls --format json --alsologtostderr                                                                                        │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image   │ functional-074555 image ls --format table --alsologtostderr                                                                                       │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image   │ functional-074555 image build -t localhost/my-image:functional-074555 testdata/build --alsologtostderr                                            │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image   │ functional-074555 image ls                                                                                                                        │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ delete  │ -p functional-074555                                                                                                                              │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ start   │ -p functional-198694 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │                     │
	│ start   │ -p functional-198694 --alsologtostderr -v=8                                                                                                       │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:07 UTC │                     │
	│ cache   │ functional-198694 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ functional-198694 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ functional-198694 cache add registry.k8s.io/pause:latest                                                                                          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ functional-198694 cache add minikube-local-cache-test:functional-198694                                                                           │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ functional-198694 cache delete minikube-local-cache-test:functional-198694                                                                        │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ ssh     │ functional-198694 ssh sudo crictl images                                                                                                          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ ssh     │ functional-198694 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ ssh     │ functional-198694 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │                     │
	│ cache   │ functional-198694 cache reload                                                                                                                    │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ ssh     │ functional-198694 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ kubectl │ functional-198694 kubectl -- --context functional-198694 get pods                                                                                 │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 21:07:11
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 21:07:11.242920  521964 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:07:11.243351  521964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:07:11.243387  521964 out.go:374] Setting ErrFile to fd 2...
	I1201 21:07:11.243410  521964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:07:11.243711  521964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:07:11.244177  521964 out.go:368] Setting JSON to false
	I1201 21:07:11.245066  521964 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10181,"bootTime":1764613051,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 21:07:11.245167  521964 start.go:143] virtualization:  
	I1201 21:07:11.248721  521964 out.go:179] * [functional-198694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 21:07:11.252584  521964 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 21:07:11.252676  521964 notify.go:221] Checking for updates...
	I1201 21:07:11.258436  521964 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 21:07:11.261368  521964 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:11.264327  521964 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 21:07:11.267307  521964 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 21:07:11.270189  521964 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 21:07:11.273718  521964 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:07:11.273862  521964 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 21:07:11.298213  521964 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 21:07:11.298331  521964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:07:11.359645  521964 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 21:07:11.34998497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:07:11.359790  521964 docker.go:319] overlay module found
	I1201 21:07:11.364655  521964 out.go:179] * Using the docker driver based on existing profile
	I1201 21:07:11.367463  521964 start.go:309] selected driver: docker
	I1201 21:07:11.367488  521964 start.go:927] validating driver "docker" against &{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:07:11.367603  521964 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 21:07:11.367700  521964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:07:11.423386  521964 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 21:07:11.414394313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:07:11.423798  521964 cni.go:84] Creating CNI manager for ""
	I1201 21:07:11.423867  521964 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:07:11.423916  521964 start.go:353] cluster config:
	{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:07:11.427203  521964 out.go:179] * Starting "functional-198694" primary control-plane node in "functional-198694" cluster
	I1201 21:07:11.430063  521964 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 21:07:11.433025  521964 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 21:07:11.436022  521964 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:07:11.436110  521964 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 21:07:11.455717  521964 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 21:07:11.455744  521964 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1201 21:07:11.500566  521964 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1201 21:07:11.687123  521964 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1201 21:07:11.687287  521964 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/config.json ...
	I1201 21:07:11.687539  521964 cache.go:243] Successfully downloaded all kic artifacts
	I1201 21:07:11.687581  521964 start.go:360] acquireMachinesLock for functional-198694: {Name:mk75190be8638b73bbf357fb21be879be3d32136 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.687647  521964 start.go:364] duration metric: took 33.501µs to acquireMachinesLock for "functional-198694"
	I1201 21:07:11.687664  521964 start.go:96] Skipping create...Using existing machine configuration
	I1201 21:07:11.687669  521964 fix.go:54] fixHost starting: 
	I1201 21:07:11.687932  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:11.688204  521964 cache.go:107] acquiring lock: {Name:mkc02adc0b0ac86da96d7b1c6f73dd96db198bdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688271  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 21:07:11.688285  521964 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.581µs
	I1201 21:07:11.688306  521964 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 21:07:11.688318  521964 cache.go:107] acquiring lock: {Name:mk453dcc67fddeb9d4497c9de9efb4fa1295449c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688354  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 21:07:11.688367  521964 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 50.575µs
	I1201 21:07:11.688373  521964 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688390  521964 cache.go:107] acquiring lock: {Name:mk419ddf7fad28d46855543ef84396416e53becc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688439  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 21:07:11.688445  521964 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 57.213µs
	I1201 21:07:11.688452  521964 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688467  521964 cache.go:107] acquiring lock: {Name:mka55d294ab8a696f44b35601f713e0abbf24c5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688503  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 21:07:11.688513  521964 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 47.581µs
	I1201 21:07:11.688520  521964 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688529  521964 cache.go:107] acquiring lock: {Name:mk6dcec1fac0989e081c750d70caa7d5974f0e1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688566  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 21:07:11.688576  521964 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 47.712µs
	I1201 21:07:11.688582  521964 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688591  521964 cache.go:107] acquiring lock: {Name:mkf9aa1f704582196eb72cf90c132f43843b4423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688628  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1201 21:07:11.688637  521964 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 46.916µs
	I1201 21:07:11.688643  521964 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 21:07:11.688652  521964 cache.go:107] acquiring lock: {Name:mk60d129c4890b38a9b86e2bfa4a9fa21bc4f57a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688684  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 21:07:11.688693  521964 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 41.952µs
	I1201 21:07:11.688698  521964 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 21:07:11.688707  521964 cache.go:107] acquiring lock: {Name:mk345d9c863dd9143d9156cb17f795118869c197 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688742  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 21:07:11.688749  521964 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 43.527µs
	I1201 21:07:11.688755  521964 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 21:07:11.688763  521964 cache.go:87] Successfully saved all images to host disk.
	I1201 21:07:11.706210  521964 fix.go:112] recreateIfNeeded on functional-198694: state=Running err=<nil>
	W1201 21:07:11.706244  521964 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 21:07:11.709560  521964 out.go:252] * Updating the running docker "functional-198694" container ...
	I1201 21:07:11.709599  521964 machine.go:94] provisionDockerMachine start ...
	I1201 21:07:11.709692  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:11.727308  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:11.727671  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:11.727690  521964 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 21:07:11.874686  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:07:11.874711  521964 ubuntu.go:182] provisioning hostname "functional-198694"
	I1201 21:07:11.874786  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:11.892845  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:11.893165  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:11.893181  521964 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-198694 && echo "functional-198694" | sudo tee /etc/hostname
	I1201 21:07:12.052942  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:07:12.053034  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.072030  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:12.072356  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:12.072379  521964 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-198694' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-198694/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-198694' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 21:07:12.227676  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 21:07:12.227702  521964 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-482752/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-482752/.minikube}
	I1201 21:07:12.227769  521964 ubuntu.go:190] setting up certificates
	I1201 21:07:12.227787  521964 provision.go:84] configureAuth start
	I1201 21:07:12.227860  521964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:07:12.247353  521964 provision.go:143] copyHostCerts
	I1201 21:07:12.247405  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 21:07:12.247445  521964 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem, removing ...
	I1201 21:07:12.247463  521964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 21:07:12.247541  521964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem (1082 bytes)
	I1201 21:07:12.247639  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 21:07:12.247660  521964 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem, removing ...
	I1201 21:07:12.247665  521964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 21:07:12.247698  521964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem (1123 bytes)
	I1201 21:07:12.247755  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 21:07:12.247776  521964 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem, removing ...
	I1201 21:07:12.247785  521964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 21:07:12.247814  521964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem (1675 bytes)
	I1201 21:07:12.247874  521964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem org=jenkins.functional-198694 san=[127.0.0.1 192.168.49.2 functional-198694 localhost minikube]
	I1201 21:07:12.352949  521964 provision.go:177] copyRemoteCerts
	I1201 21:07:12.353031  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 21:07:12.353075  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.373178  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:12.479006  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1201 21:07:12.479125  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 21:07:12.496931  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1201 21:07:12.497043  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1201 21:07:12.515649  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1201 21:07:12.515717  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 21:07:12.533930  521964 provision.go:87] duration metric: took 306.12888ms to configureAuth
	I1201 21:07:12.533957  521964 ubuntu.go:206] setting minikube options for container-runtime
	I1201 21:07:12.534156  521964 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:07:12.534262  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.551972  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:12.552286  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:12.552304  521964 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 21:07:12.889959  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 21:07:12.889981  521964 machine.go:97] duration metric: took 1.180373916s to provisionDockerMachine
	I1201 21:07:12.889993  521964 start.go:293] postStartSetup for "functional-198694" (driver="docker")
	I1201 21:07:12.890006  521964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 21:07:12.890086  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 21:07:12.890139  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.908762  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.018597  521964 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 21:07:13.022335  521964 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1201 21:07:13.022369  521964 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1201 21:07:13.022376  521964 command_runner.go:130] > VERSION_ID="12"
	I1201 21:07:13.022381  521964 command_runner.go:130] > VERSION="12 (bookworm)"
	I1201 21:07:13.022386  521964 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1201 21:07:13.022390  521964 command_runner.go:130] > ID=debian
	I1201 21:07:13.022396  521964 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1201 21:07:13.022401  521964 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1201 21:07:13.022407  521964 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1201 21:07:13.022493  521964 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 21:07:13.022513  521964 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 21:07:13.022526  521964 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/addons for local assets ...
	I1201 21:07:13.022584  521964 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/files for local assets ...
	I1201 21:07:13.022685  521964 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> 4860022.pem in /etc/ssl/certs
	I1201 21:07:13.022696  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> /etc/ssl/certs/4860022.pem
	I1201 21:07:13.022772  521964 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts -> hosts in /etc/test/nested/copy/486002
	I1201 21:07:13.022784  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts -> /etc/test/nested/copy/486002/hosts
	I1201 21:07:13.022828  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/486002
	I1201 21:07:13.031305  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:07:13.050359  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts --> /etc/test/nested/copy/486002/hosts (40 bytes)
	I1201 21:07:13.069098  521964 start.go:296] duration metric: took 179.090292ms for postStartSetup
	I1201 21:07:13.069200  521964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 21:07:13.069250  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:13.087931  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.188150  521964 command_runner.go:130] > 18%
	I1201 21:07:13.188720  521964 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 21:07:13.193507  521964 command_runner.go:130] > 161G
	I1201 21:07:13.195867  521964 fix.go:56] duration metric: took 1.508190835s for fixHost
	I1201 21:07:13.195933  521964 start.go:83] releasing machines lock for "functional-198694", held for 1.508273853s
	I1201 21:07:13.196019  521964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:07:13.216611  521964 ssh_runner.go:195] Run: cat /version.json
	I1201 21:07:13.216667  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:13.216936  521964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 21:07:13.216990  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:13.238266  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.249198  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.342561  521964 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1201 21:07:13.342766  521964 ssh_runner.go:195] Run: systemctl --version
	I1201 21:07:13.434302  521964 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1201 21:07:13.434432  521964 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1201 21:07:13.434476  521964 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1201 21:07:13.434562  521964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 21:07:13.473148  521964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1201 21:07:13.477954  521964 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1201 21:07:13.478007  521964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 21:07:13.478081  521964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 21:07:13.486513  521964 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 21:07:13.486536  521964 start.go:496] detecting cgroup driver to use...
	I1201 21:07:13.486599  521964 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1201 21:07:13.486671  521964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 21:07:13.502588  521964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 21:07:13.515851  521964 docker.go:218] disabling cri-docker service (if available) ...
	I1201 21:07:13.515935  521964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 21:07:13.531981  521964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 21:07:13.545612  521964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 21:07:13.660013  521964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 21:07:13.783921  521964 docker.go:234] disabling docker service ...
	I1201 21:07:13.783999  521964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 21:07:13.801145  521964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 21:07:13.814790  521964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 21:07:13.959260  521964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 21:07:14.082027  521964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 21:07:14.096899  521964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 21:07:14.110653  521964 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1201 21:07:14.112111  521964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 21:07:14.112234  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.121522  521964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 21:07:14.121606  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.132262  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.141626  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.151111  521964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 21:07:14.160033  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.169622  521964 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.178443  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.187976  521964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 21:07:14.194851  521964 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1201 21:07:14.196003  521964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 21:07:14.203835  521964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:07:14.312679  521964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 21:07:14.495171  521964 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 21:07:14.495301  521964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 21:07:14.499086  521964 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1201 21:07:14.499110  521964 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1201 21:07:14.499118  521964 command_runner.go:130] > Device: 0,72	Inode: 1746        Links: 1
	I1201 21:07:14.499125  521964 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1201 21:07:14.499150  521964 command_runner.go:130] > Access: 2025-12-01 21:07:14.424432171 +0000
	I1201 21:07:14.499176  521964 command_runner.go:130] > Modify: 2025-12-01 21:07:14.424432171 +0000
	I1201 21:07:14.499186  521964 command_runner.go:130] > Change: 2025-12-01 21:07:14.424432171 +0000
	I1201 21:07:14.499190  521964 command_runner.go:130] >  Birth: -
	I1201 21:07:14.499219  521964 start.go:564] Will wait 60s for crictl version
	I1201 21:07:14.499275  521964 ssh_runner.go:195] Run: which crictl
	I1201 21:07:14.502678  521964 command_runner.go:130] > /usr/local/bin/crictl
	I1201 21:07:14.502996  521964 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 21:07:14.524882  521964 command_runner.go:130] > Version:  0.1.0
	I1201 21:07:14.524906  521964 command_runner.go:130] > RuntimeName:  cri-o
	I1201 21:07:14.524912  521964 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1201 21:07:14.524918  521964 command_runner.go:130] > RuntimeApiVersion:  v1
	I1201 21:07:14.526840  521964 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 21:07:14.526982  521964 ssh_runner.go:195] Run: crio --version
	I1201 21:07:14.553910  521964 command_runner.go:130] > crio version 1.34.2
	I1201 21:07:14.553933  521964 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1201 21:07:14.553939  521964 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1201 21:07:14.553944  521964 command_runner.go:130] >    GitTreeState:   dirty
	I1201 21:07:14.553950  521964 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1201 21:07:14.553971  521964 command_runner.go:130] >    GoVersion:      go1.24.6
	I1201 21:07:14.553976  521964 command_runner.go:130] >    Compiler:       gc
	I1201 21:07:14.553980  521964 command_runner.go:130] >    Platform:       linux/arm64
	I1201 21:07:14.553984  521964 command_runner.go:130] >    Linkmode:       static
	I1201 21:07:14.553987  521964 command_runner.go:130] >    BuildTags:
	I1201 21:07:14.553991  521964 command_runner.go:130] >      static
	I1201 21:07:14.553994  521964 command_runner.go:130] >      netgo
	I1201 21:07:14.553998  521964 command_runner.go:130] >      osusergo
	I1201 21:07:14.554001  521964 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1201 21:07:14.554009  521964 command_runner.go:130] >      seccomp
	I1201 21:07:14.554012  521964 command_runner.go:130] >      apparmor
	I1201 21:07:14.554016  521964 command_runner.go:130] >      selinux
	I1201 21:07:14.554020  521964 command_runner.go:130] >    LDFlags:          unknown
	I1201 21:07:14.554024  521964 command_runner.go:130] >    SeccompEnabled:   true
	I1201 21:07:14.554028  521964 command_runner.go:130] >    AppArmorEnabled:  false
	I1201 21:07:14.556106  521964 ssh_runner.go:195] Run: crio --version
	I1201 21:07:14.582720  521964 command_runner.go:130] > crio version 1.34.2
	I1201 21:07:14.582784  521964 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1201 21:07:14.582817  521964 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1201 21:07:14.582840  521964 command_runner.go:130] >    GitTreeState:   dirty
	I1201 21:07:14.582863  521964 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1201 21:07:14.582897  521964 command_runner.go:130] >    GoVersion:      go1.24.6
	I1201 21:07:14.582922  521964 command_runner.go:130] >    Compiler:       gc
	I1201 21:07:14.582947  521964 command_runner.go:130] >    Platform:       linux/arm64
	I1201 21:07:14.582984  521964 command_runner.go:130] >    Linkmode:       static
	I1201 21:07:14.583008  521964 command_runner.go:130] >    BuildTags:
	I1201 21:07:14.583029  521964 command_runner.go:130] >      static
	I1201 21:07:14.583063  521964 command_runner.go:130] >      netgo
	I1201 21:07:14.583085  521964 command_runner.go:130] >      osusergo
	I1201 21:07:14.583101  521964 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1201 21:07:14.583121  521964 command_runner.go:130] >      seccomp
	I1201 21:07:14.583170  521964 command_runner.go:130] >      apparmor
	I1201 21:07:14.583196  521964 command_runner.go:130] >      selinux
	I1201 21:07:14.583217  521964 command_runner.go:130] >    LDFlags:          unknown
	I1201 21:07:14.583262  521964 command_runner.go:130] >    SeccompEnabled:   true
	I1201 21:07:14.583287  521964 command_runner.go:130] >    AppArmorEnabled:  false
	I1201 21:07:14.589911  521964 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1201 21:07:14.592808  521964 cli_runner.go:164] Run: docker network inspect functional-198694 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 21:07:14.609405  521964 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1201 21:07:14.613461  521964 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1201 21:07:14.613638  521964 kubeadm.go:884] updating cluster {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 21:07:14.613753  521964 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:07:14.613807  521964 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 21:07:14.655721  521964 command_runner.go:130] > {
	I1201 21:07:14.655745  521964 command_runner.go:130] >   "images":  [
	I1201 21:07:14.655750  521964 command_runner.go:130] >     {
	I1201 21:07:14.655758  521964 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1201 21:07:14.655763  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.655768  521964 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1201 21:07:14.655771  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655775  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.655786  521964 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1201 21:07:14.655790  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655794  521964 command_runner.go:130] >       "size":  "29035622",
	I1201 21:07:14.655798  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.655803  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.655811  521964 command_runner.go:130] >     },
	I1201 21:07:14.655815  521964 command_runner.go:130] >     {
	I1201 21:07:14.655825  521964 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1201 21:07:14.655839  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.655846  521964 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1201 21:07:14.655854  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655858  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.655866  521964 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1201 21:07:14.655871  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655876  521964 command_runner.go:130] >       "size":  "74488375",
	I1201 21:07:14.655880  521964 command_runner.go:130] >       "username":  "nonroot",
	I1201 21:07:14.655884  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.655888  521964 command_runner.go:130] >     },
	I1201 21:07:14.655891  521964 command_runner.go:130] >     {
	I1201 21:07:14.655901  521964 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1201 21:07:14.655907  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.655912  521964 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1201 21:07:14.655918  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655927  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.655946  521964 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1201 21:07:14.655955  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655960  521964 command_runner.go:130] >       "size":  "60854229",
	I1201 21:07:14.655965  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.655974  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.655978  521964 command_runner.go:130] >       },
	I1201 21:07:14.655982  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.655986  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.655989  521964 command_runner.go:130] >     },
	I1201 21:07:14.655995  521964 command_runner.go:130] >     {
	I1201 21:07:14.656002  521964 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1201 21:07:14.656010  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656015  521964 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1201 21:07:14.656018  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656024  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656033  521964 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1201 21:07:14.656040  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656044  521964 command_runner.go:130] >       "size":  "84947242",
	I1201 21:07:14.656047  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656051  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.656061  521964 command_runner.go:130] >       },
	I1201 21:07:14.656065  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656068  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656071  521964 command_runner.go:130] >     },
	I1201 21:07:14.656075  521964 command_runner.go:130] >     {
	I1201 21:07:14.656084  521964 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1201 21:07:14.656090  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656096  521964 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1201 21:07:14.656100  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656106  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656115  521964 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1201 21:07:14.656121  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656132  521964 command_runner.go:130] >       "size":  "72167568",
	I1201 21:07:14.656139  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656143  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.656146  521964 command_runner.go:130] >       },
	I1201 21:07:14.656150  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656154  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656160  521964 command_runner.go:130] >     },
	I1201 21:07:14.656163  521964 command_runner.go:130] >     {
	I1201 21:07:14.656170  521964 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1201 21:07:14.656176  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656182  521964 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1201 21:07:14.656185  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656209  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656218  521964 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1201 21:07:14.656223  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656228  521964 command_runner.go:130] >       "size":  "74105124",
	I1201 21:07:14.656231  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656236  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656241  521964 command_runner.go:130] >     },
	I1201 21:07:14.656245  521964 command_runner.go:130] >     {
	I1201 21:07:14.656251  521964 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1201 21:07:14.656257  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656262  521964 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1201 21:07:14.656268  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656272  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656279  521964 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1201 21:07:14.656285  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656289  521964 command_runner.go:130] >       "size":  "49819792",
	I1201 21:07:14.656293  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656303  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.656307  521964 command_runner.go:130] >       },
	I1201 21:07:14.656311  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656316  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656323  521964 command_runner.go:130] >     },
	I1201 21:07:14.656330  521964 command_runner.go:130] >     {
	I1201 21:07:14.656337  521964 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1201 21:07:14.656341  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656345  521964 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1201 21:07:14.656350  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656355  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656365  521964 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1201 21:07:14.656368  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656372  521964 command_runner.go:130] >       "size":  "517328",
	I1201 21:07:14.656378  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656383  521964 command_runner.go:130] >         "value":  "65535"
	I1201 21:07:14.656388  521964 command_runner.go:130] >       },
	I1201 21:07:14.656392  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656395  521964 command_runner.go:130] >       "pinned":  true
	I1201 21:07:14.656399  521964 command_runner.go:130] >     }
	I1201 21:07:14.656404  521964 command_runner.go:130] >   ]
	I1201 21:07:14.656408  521964 command_runner.go:130] > }
	I1201 21:07:14.656549  521964 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 21:07:14.656561  521964 cache_images.go:86] Images are preloaded, skipping loading
	I1201 21:07:14.656568  521964 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1201 21:07:14.656668  521964 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-198694 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 21:07:14.656752  521964 ssh_runner.go:195] Run: crio config
	I1201 21:07:14.734869  521964 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1201 21:07:14.734915  521964 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1201 21:07:14.734928  521964 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1201 21:07:14.734945  521964 command_runner.go:130] > #
	I1201 21:07:14.734957  521964 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1201 21:07:14.734978  521964 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1201 21:07:14.734989  521964 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1201 21:07:14.735001  521964 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1201 21:07:14.735009  521964 command_runner.go:130] > # reload'.
	I1201 21:07:14.735017  521964 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1201 21:07:14.735028  521964 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1201 21:07:14.735038  521964 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1201 21:07:14.735051  521964 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1201 21:07:14.735059  521964 command_runner.go:130] > [crio]
	I1201 21:07:14.735069  521964 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1201 21:07:14.735078  521964 command_runner.go:130] > # containers images, in this directory.
	I1201 21:07:14.735108  521964 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1201 21:07:14.735125  521964 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1201 21:07:14.735149  521964 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1201 21:07:14.735158  521964 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1201 21:07:14.735167  521964 command_runner.go:130] > # imagestore = ""
	I1201 21:07:14.735180  521964 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1201 21:07:14.735200  521964 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1201 21:07:14.735401  521964 command_runner.go:130] > # storage_driver = "overlay"
	I1201 21:07:14.735416  521964 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1201 21:07:14.735422  521964 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1201 21:07:14.735427  521964 command_runner.go:130] > # storage_option = [
	I1201 21:07:14.735430  521964 command_runner.go:130] > # ]
	I1201 21:07:14.735440  521964 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1201 21:07:14.735447  521964 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1201 21:07:14.735451  521964 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1201 21:07:14.735457  521964 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1201 21:07:14.735464  521964 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1201 21:07:14.735475  521964 command_runner.go:130] > # always happen on a node reboot
	I1201 21:07:14.735773  521964 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1201 21:07:14.735799  521964 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1201 21:07:14.735807  521964 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1201 21:07:14.735813  521964 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1201 21:07:14.735817  521964 command_runner.go:130] > # version_file_persist = ""
	I1201 21:07:14.735825  521964 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1201 21:07:14.735839  521964 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1201 21:07:14.735844  521964 command_runner.go:130] > # internal_wipe = true
	I1201 21:07:14.735852  521964 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1201 21:07:14.735858  521964 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1201 21:07:14.735861  521964 command_runner.go:130] > # internal_repair = true
	I1201 21:07:14.735867  521964 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1201 21:07:14.735873  521964 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1201 21:07:14.735882  521964 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1201 21:07:14.735891  521964 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1201 21:07:14.735901  521964 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1201 21:07:14.735904  521964 command_runner.go:130] > [crio.api]
	I1201 21:07:14.735909  521964 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1201 21:07:14.735916  521964 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1201 21:07:14.735921  521964 command_runner.go:130] > # IP address on which the stream server will listen.
	I1201 21:07:14.735925  521964 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1201 21:07:14.735932  521964 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1201 21:07:14.735946  521964 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1201 21:07:14.735950  521964 command_runner.go:130] > # stream_port = "0"
	I1201 21:07:14.735958  521964 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1201 21:07:14.735962  521964 command_runner.go:130] > # stream_enable_tls = false
	I1201 21:07:14.735968  521964 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1201 21:07:14.735972  521964 command_runner.go:130] > # stream_idle_timeout = ""
	I1201 21:07:14.735981  521964 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1201 21:07:14.735991  521964 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1201 21:07:14.735995  521964 command_runner.go:130] > # stream_tls_cert = ""
	I1201 21:07:14.736001  521964 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1201 21:07:14.736006  521964 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1201 21:07:14.736013  521964 command_runner.go:130] > # stream_tls_key = ""
	I1201 21:07:14.736023  521964 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1201 21:07:14.736030  521964 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1201 21:07:14.736037  521964 command_runner.go:130] > # automatically pick up the changes.
	I1201 21:07:14.736045  521964 command_runner.go:130] > # stream_tls_ca = ""
	I1201 21:07:14.736072  521964 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1201 21:07:14.736077  521964 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1201 21:07:14.736085  521964 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1201 21:07:14.736092  521964 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1201 21:07:14.736099  521964 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1201 21:07:14.736105  521964 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1201 21:07:14.736108  521964 command_runner.go:130] > [crio.runtime]
	I1201 21:07:14.736114  521964 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1201 21:07:14.736119  521964 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1201 21:07:14.736127  521964 command_runner.go:130] > # "nofile=1024:2048"
	I1201 21:07:14.736134  521964 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1201 21:07:14.736138  521964 command_runner.go:130] > # default_ulimits = [
	I1201 21:07:14.736141  521964 command_runner.go:130] > # ]
	I1201 21:07:14.736146  521964 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1201 21:07:14.736150  521964 command_runner.go:130] > # no_pivot = false
	I1201 21:07:14.736162  521964 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1201 21:07:14.736168  521964 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1201 21:07:14.736196  521964 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1201 21:07:14.736202  521964 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1201 21:07:14.736210  521964 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1201 21:07:14.736220  521964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1201 21:07:14.736223  521964 command_runner.go:130] > # conmon = ""
	I1201 21:07:14.736228  521964 command_runner.go:130] > # Cgroup setting for conmon
	I1201 21:07:14.736235  521964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1201 21:07:14.736239  521964 command_runner.go:130] > conmon_cgroup = "pod"
	I1201 21:07:14.736257  521964 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1201 21:07:14.736262  521964 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1201 21:07:14.736269  521964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1201 21:07:14.736273  521964 command_runner.go:130] > # conmon_env = [
	I1201 21:07:14.736276  521964 command_runner.go:130] > # ]
	I1201 21:07:14.736281  521964 command_runner.go:130] > # Additional environment variables to set for all the
	I1201 21:07:14.736286  521964 command_runner.go:130] > # containers. These are overridden if set in the
	I1201 21:07:14.736295  521964 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1201 21:07:14.736302  521964 command_runner.go:130] > # default_env = [
	I1201 21:07:14.736308  521964 command_runner.go:130] > # ]
	I1201 21:07:14.736314  521964 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1201 21:07:14.736322  521964 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1201 21:07:14.736328  521964 command_runner.go:130] > # selinux = false
	I1201 21:07:14.736356  521964 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1201 21:07:14.736370  521964 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1201 21:07:14.736375  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736379  521964 command_runner.go:130] > # seccomp_profile = ""
	I1201 21:07:14.736388  521964 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1201 21:07:14.736393  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736397  521964 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1201 21:07:14.736406  521964 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1201 21:07:14.736413  521964 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1201 21:07:14.736419  521964 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1201 21:07:14.736425  521964 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1201 21:07:14.736431  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736439  521964 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1201 21:07:14.736445  521964 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1201 21:07:14.736449  521964 command_runner.go:130] > # the cgroup blockio controller.
	I1201 21:07:14.736452  521964 command_runner.go:130] > # blockio_config_file = ""
	I1201 21:07:14.736459  521964 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1201 21:07:14.736463  521964 command_runner.go:130] > # blockio parameters.
	I1201 21:07:14.736467  521964 command_runner.go:130] > # blockio_reload = false
	I1201 21:07:14.736474  521964 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1201 21:07:14.736477  521964 command_runner.go:130] > # irqbalance daemon.
	I1201 21:07:14.736483  521964 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1201 21:07:14.736489  521964 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1201 21:07:14.736496  521964 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1201 21:07:14.736508  521964 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1201 21:07:14.736514  521964 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1201 21:07:14.736523  521964 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1201 21:07:14.736532  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736536  521964 command_runner.go:130] > # rdt_config_file = ""
	I1201 21:07:14.736545  521964 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1201 21:07:14.736550  521964 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1201 21:07:14.736555  521964 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1201 21:07:14.736560  521964 command_runner.go:130] > # separate_pull_cgroup = ""
	I1201 21:07:14.736569  521964 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1201 21:07:14.736576  521964 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1201 21:07:14.736580  521964 command_runner.go:130] > # will be added.
	I1201 21:07:14.736585  521964 command_runner.go:130] > # default_capabilities = [
	I1201 21:07:14.737078  521964 command_runner.go:130] > # 	"CHOWN",
	I1201 21:07:14.737092  521964 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1201 21:07:14.737096  521964 command_runner.go:130] > # 	"FSETID",
	I1201 21:07:14.737099  521964 command_runner.go:130] > # 	"FOWNER",
	I1201 21:07:14.737102  521964 command_runner.go:130] > # 	"SETGID",
	I1201 21:07:14.737106  521964 command_runner.go:130] > # 	"SETUID",
	I1201 21:07:14.737130  521964 command_runner.go:130] > # 	"SETPCAP",
	I1201 21:07:14.737134  521964 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1201 21:07:14.737138  521964 command_runner.go:130] > # 	"KILL",
	I1201 21:07:14.737144  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737153  521964 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1201 21:07:14.737160  521964 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1201 21:07:14.737165  521964 command_runner.go:130] > # add_inheritable_capabilities = false
	I1201 21:07:14.737171  521964 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1201 21:07:14.737189  521964 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1201 21:07:14.737193  521964 command_runner.go:130] > default_sysctls = [
	I1201 21:07:14.737198  521964 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1201 21:07:14.737200  521964 command_runner.go:130] > ]
	I1201 21:07:14.737205  521964 command_runner.go:130] > # List of devices on the host that a
	I1201 21:07:14.737212  521964 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1201 21:07:14.737215  521964 command_runner.go:130] > # allowed_devices = [
	I1201 21:07:14.737219  521964 command_runner.go:130] > # 	"/dev/fuse",
	I1201 21:07:14.737222  521964 command_runner.go:130] > # 	"/dev/net/tun",
	I1201 21:07:14.737225  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737230  521964 command_runner.go:130] > # List of additional devices. specified as
	I1201 21:07:14.737237  521964 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1201 21:07:14.737243  521964 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1201 21:07:14.737249  521964 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1201 21:07:14.737253  521964 command_runner.go:130] > # additional_devices = [
	I1201 21:07:14.737257  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737266  521964 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1201 21:07:14.737271  521964 command_runner.go:130] > # cdi_spec_dirs = [
	I1201 21:07:14.737274  521964 command_runner.go:130] > # 	"/etc/cdi",
	I1201 21:07:14.737277  521964 command_runner.go:130] > # 	"/var/run/cdi",
	I1201 21:07:14.737280  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737286  521964 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1201 21:07:14.737293  521964 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1201 21:07:14.737297  521964 command_runner.go:130] > # Defaults to false.
	I1201 21:07:14.737311  521964 command_runner.go:130] > # device_ownership_from_security_context = false
	I1201 21:07:14.737318  521964 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1201 21:07:14.737324  521964 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1201 21:07:14.737327  521964 command_runner.go:130] > # hooks_dir = [
	I1201 21:07:14.737335  521964 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1201 21:07:14.737338  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737344  521964 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1201 21:07:14.737352  521964 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1201 21:07:14.737357  521964 command_runner.go:130] > # its default mounts from the following two files:
	I1201 21:07:14.737360  521964 command_runner.go:130] > #
	I1201 21:07:14.737366  521964 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1201 21:07:14.737372  521964 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1201 21:07:14.737378  521964 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1201 21:07:14.737380  521964 command_runner.go:130] > #
	I1201 21:07:14.737386  521964 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1201 21:07:14.737393  521964 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1201 21:07:14.737399  521964 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1201 21:07:14.737407  521964 command_runner.go:130] > #      only add mounts it finds in this file.
	I1201 21:07:14.737410  521964 command_runner.go:130] > #
	I1201 21:07:14.737414  521964 command_runner.go:130] > # default_mounts_file = ""
	I1201 21:07:14.737422  521964 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1201 21:07:14.737429  521964 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1201 21:07:14.737433  521964 command_runner.go:130] > # pids_limit = -1
	I1201 21:07:14.737440  521964 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1201 21:07:14.737446  521964 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1201 21:07:14.737452  521964 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1201 21:07:14.737460  521964 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1201 21:07:14.737464  521964 command_runner.go:130] > # log_size_max = -1
	I1201 21:07:14.737472  521964 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1201 21:07:14.737476  521964 command_runner.go:130] > # log_to_journald = false
	I1201 21:07:14.737487  521964 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1201 21:07:14.737492  521964 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1201 21:07:14.737497  521964 command_runner.go:130] > # Path to directory for container attach sockets.
	I1201 21:07:14.737502  521964 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1201 21:07:14.737511  521964 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1201 21:07:14.737516  521964 command_runner.go:130] > # bind_mount_prefix = ""
	I1201 21:07:14.737521  521964 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1201 21:07:14.737528  521964 command_runner.go:130] > # read_only = false
	I1201 21:07:14.737534  521964 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1201 21:07:14.737541  521964 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1201 21:07:14.737545  521964 command_runner.go:130] > # live configuration reload.
	I1201 21:07:14.737549  521964 command_runner.go:130] > # log_level = "info"
	I1201 21:07:14.737557  521964 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1201 21:07:14.737563  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.737567  521964 command_runner.go:130] > # log_filter = ""
	I1201 21:07:14.737573  521964 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1201 21:07:14.737583  521964 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1201 21:07:14.737588  521964 command_runner.go:130] > # separated by comma.
	I1201 21:07:14.737596  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737599  521964 command_runner.go:130] > # uid_mappings = ""
	I1201 21:07:14.737606  521964 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1201 21:07:14.737612  521964 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1201 21:07:14.737616  521964 command_runner.go:130] > # separated by comma.
	I1201 21:07:14.737624  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737627  521964 command_runner.go:130] > # gid_mappings = ""
	I1201 21:07:14.737634  521964 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1201 21:07:14.737640  521964 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1201 21:07:14.737646  521964 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1201 21:07:14.737660  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737665  521964 command_runner.go:130] > # minimum_mappable_uid = -1
	I1201 21:07:14.737674  521964 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1201 21:07:14.737681  521964 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1201 21:07:14.737686  521964 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1201 21:07:14.737694  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737937  521964 command_runner.go:130] > # minimum_mappable_gid = -1
	I1201 21:07:14.737957  521964 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1201 21:07:14.737967  521964 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1201 21:07:14.737974  521964 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1201 21:07:14.737980  521964 command_runner.go:130] > # ctr_stop_timeout = 30
	I1201 21:07:14.737998  521964 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1201 21:07:14.738018  521964 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1201 21:07:14.738028  521964 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1201 21:07:14.738033  521964 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1201 21:07:14.738042  521964 command_runner.go:130] > # drop_infra_ctr = true
	I1201 21:07:14.738048  521964 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1201 21:07:14.738058  521964 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1201 21:07:14.738073  521964 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1201 21:07:14.738082  521964 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1201 21:07:14.738090  521964 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1201 21:07:14.738099  521964 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1201 21:07:14.738106  521964 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1201 21:07:14.738116  521964 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1201 21:07:14.738120  521964 command_runner.go:130] > # shared_cpuset = ""
	I1201 21:07:14.738130  521964 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1201 21:07:14.738139  521964 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1201 21:07:14.738154  521964 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1201 21:07:14.738162  521964 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1201 21:07:14.738167  521964 command_runner.go:130] > # pinns_path = ""
	I1201 21:07:14.738173  521964 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1201 21:07:14.738182  521964 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1201 21:07:14.738191  521964 command_runner.go:130] > # enable_criu_support = true
	I1201 21:07:14.738197  521964 command_runner.go:130] > # Enable/disable the generation of the container,
	I1201 21:07:14.738206  521964 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1201 21:07:14.738221  521964 command_runner.go:130] > # enable_pod_events = false
	I1201 21:07:14.738232  521964 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1201 21:07:14.738238  521964 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1201 21:07:14.738242  521964 command_runner.go:130] > # default_runtime = "crun"
	I1201 21:07:14.738251  521964 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1201 21:07:14.738259  521964 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1201 21:07:14.738269  521964 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1201 21:07:14.738278  521964 command_runner.go:130] > # creation as a file is not desired either.
	I1201 21:07:14.738287  521964 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1201 21:07:14.738304  521964 command_runner.go:130] > # the hostname is being managed dynamically.
	I1201 21:07:14.738322  521964 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1201 21:07:14.738329  521964 command_runner.go:130] > # ]
	I1201 21:07:14.738336  521964 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1201 21:07:14.738347  521964 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1201 21:07:14.738353  521964 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1201 21:07:14.738358  521964 command_runner.go:130] > # Each entry in the table should follow the format:
	I1201 21:07:14.738365  521964 command_runner.go:130] > #
	I1201 21:07:14.738381  521964 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1201 21:07:14.738387  521964 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1201 21:07:14.738394  521964 command_runner.go:130] > # runtime_type = "oci"
	I1201 21:07:14.738400  521964 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1201 21:07:14.738408  521964 command_runner.go:130] > # inherit_default_runtime = false
	I1201 21:07:14.738414  521964 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1201 21:07:14.738421  521964 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1201 21:07:14.738426  521964 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1201 21:07:14.738434  521964 command_runner.go:130] > # monitor_env = []
	I1201 21:07:14.738439  521964 command_runner.go:130] > # privileged_without_host_devices = false
	I1201 21:07:14.738449  521964 command_runner.go:130] > # allowed_annotations = []
	I1201 21:07:14.738459  521964 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1201 21:07:14.738463  521964 command_runner.go:130] > # no_sync_log = false
	I1201 21:07:14.738469  521964 command_runner.go:130] > # default_annotations = {}
	I1201 21:07:14.738473  521964 command_runner.go:130] > # stream_websockets = false
	I1201 21:07:14.738481  521964 command_runner.go:130] > # seccomp_profile = ""
	I1201 21:07:14.738515  521964 command_runner.go:130] > # Where:
	I1201 21:07:14.738533  521964 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1201 21:07:14.738539  521964 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1201 21:07:14.738546  521964 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1201 21:07:14.738556  521964 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1201 21:07:14.738560  521964 command_runner.go:130] > #   in $PATH.
	I1201 21:07:14.738572  521964 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1201 21:07:14.738581  521964 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1201 21:07:14.738587  521964 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1201 21:07:14.738601  521964 command_runner.go:130] > #   state.
	I1201 21:07:14.738612  521964 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1201 21:07:14.738623  521964 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1201 21:07:14.738629  521964 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1201 21:07:14.738641  521964 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1201 21:07:14.738648  521964 command_runner.go:130] > #   the values from the default runtime on load time.
	I1201 21:07:14.738658  521964 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1201 21:07:14.738675  521964 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1201 21:07:14.738686  521964 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1201 21:07:14.738697  521964 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1201 21:07:14.738706  521964 command_runner.go:130] > #   The currently recognized values are:
	I1201 21:07:14.738713  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1201 21:07:14.738722  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1201 21:07:14.738731  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1201 21:07:14.738737  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1201 21:07:14.738751  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1201 21:07:14.738762  521964 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1201 21:07:14.738774  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1201 21:07:14.738785  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1201 21:07:14.738795  521964 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1201 21:07:14.738801  521964 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1201 21:07:14.738814  521964 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1201 21:07:14.738830  521964 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1201 21:07:14.738841  521964 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1201 21:07:14.738847  521964 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1201 21:07:14.738857  521964 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1201 21:07:14.738871  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1201 21:07:14.738878  521964 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1201 21:07:14.738885  521964 command_runner.go:130] > #   deprecated option "conmon".
	I1201 21:07:14.738904  521964 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1201 21:07:14.738913  521964 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1201 21:07:14.738921  521964 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1201 21:07:14.738930  521964 command_runner.go:130] > #   should be moved to the container's cgroup
	I1201 21:07:14.738937  521964 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1201 21:07:14.738949  521964 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1201 21:07:14.738961  521964 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1201 21:07:14.738974  521964 command_runner.go:130] > #   conmon-rs by using:
	I1201 21:07:14.738982  521964 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1201 21:07:14.738996  521964 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1201 21:07:14.739008  521964 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1201 21:07:14.739024  521964 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1201 21:07:14.739033  521964 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1201 21:07:14.739040  521964 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1201 21:07:14.739057  521964 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1201 21:07:14.739067  521964 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1201 21:07:14.739077  521964 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1201 21:07:14.739089  521964 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1201 21:07:14.739097  521964 command_runner.go:130] > #   when a machine crash happens.
	I1201 21:07:14.739105  521964 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1201 21:07:14.739117  521964 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1201 21:07:14.739152  521964 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1201 21:07:14.739158  521964 command_runner.go:130] > #   seccomp profile for the runtime.
	I1201 21:07:14.739165  521964 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1201 21:07:14.739172  521964 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1201 21:07:14.739175  521964 command_runner.go:130] > #
	I1201 21:07:14.739179  521964 command_runner.go:130] > # Using the seccomp notifier feature:
	I1201 21:07:14.739182  521964 command_runner.go:130] > #
	I1201 21:07:14.739188  521964 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1201 21:07:14.739195  521964 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1201 21:07:14.739204  521964 command_runner.go:130] > #
	I1201 21:07:14.739211  521964 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1201 21:07:14.739217  521964 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1201 21:07:14.739220  521964 command_runner.go:130] > #
	I1201 21:07:14.739225  521964 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1201 21:07:14.739228  521964 command_runner.go:130] > # feature.
	I1201 21:07:14.739231  521964 command_runner.go:130] > #
	I1201 21:07:14.739237  521964 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1201 21:07:14.739247  521964 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1201 21:07:14.739257  521964 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1201 21:07:14.739263  521964 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1201 21:07:14.739270  521964 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1201 21:07:14.739281  521964 command_runner.go:130] > #
	I1201 21:07:14.739288  521964 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1201 21:07:14.739293  521964 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1201 21:07:14.739296  521964 command_runner.go:130] > #
	I1201 21:07:14.739302  521964 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1201 21:07:14.739308  521964 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1201 21:07:14.739310  521964 command_runner.go:130] > #
	I1201 21:07:14.739316  521964 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1201 21:07:14.739322  521964 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1201 21:07:14.739325  521964 command_runner.go:130] > # limitation.
	I1201 21:07:14.739329  521964 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1201 21:07:14.739334  521964 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1201 21:07:14.739337  521964 command_runner.go:130] > runtime_type = ""
	I1201 21:07:14.739341  521964 command_runner.go:130] > runtime_root = "/run/crun"
	I1201 21:07:14.739345  521964 command_runner.go:130] > inherit_default_runtime = false
	I1201 21:07:14.739356  521964 command_runner.go:130] > runtime_config_path = ""
	I1201 21:07:14.739360  521964 command_runner.go:130] > container_min_memory = ""
	I1201 21:07:14.739365  521964 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1201 21:07:14.739369  521964 command_runner.go:130] > monitor_cgroup = "pod"
	I1201 21:07:14.739373  521964 command_runner.go:130] > monitor_exec_cgroup = ""
	I1201 21:07:14.739380  521964 command_runner.go:130] > allowed_annotations = [
	I1201 21:07:14.739384  521964 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1201 21:07:14.739391  521964 command_runner.go:130] > ]
	I1201 21:07:14.739396  521964 command_runner.go:130] > privileged_without_host_devices = false
	I1201 21:07:14.739400  521964 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1201 21:07:14.739409  521964 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1201 21:07:14.739413  521964 command_runner.go:130] > runtime_type = ""
	I1201 21:07:14.739420  521964 command_runner.go:130] > runtime_root = "/run/runc"
	I1201 21:07:14.739434  521964 command_runner.go:130] > inherit_default_runtime = false
	I1201 21:07:14.739442  521964 command_runner.go:130] > runtime_config_path = ""
	I1201 21:07:14.739450  521964 command_runner.go:130] > container_min_memory = ""
	I1201 21:07:14.739455  521964 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1201 21:07:14.739459  521964 command_runner.go:130] > monitor_cgroup = "pod"
	I1201 21:07:14.739465  521964 command_runner.go:130] > monitor_exec_cgroup = ""
	I1201 21:07:14.739470  521964 command_runner.go:130] > privileged_without_host_devices = false
	I1201 21:07:14.739481  521964 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1201 21:07:14.739490  521964 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1201 21:07:14.739507  521964 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1201 21:07:14.739519  521964 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1201 21:07:14.739534  521964 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1201 21:07:14.739546  521964 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1201 21:07:14.739559  521964 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1201 21:07:14.739569  521964 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1201 21:07:14.739589  521964 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1201 21:07:14.739601  521964 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1201 21:07:14.739616  521964 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1201 21:07:14.739627  521964 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1201 21:07:14.739635  521964 command_runner.go:130] > # Example:
	I1201 21:07:14.739639  521964 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1201 21:07:14.739652  521964 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1201 21:07:14.739663  521964 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1201 21:07:14.739669  521964 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1201 21:07:14.739672  521964 command_runner.go:130] > # cpuset = "0-1"
	I1201 21:07:14.739681  521964 command_runner.go:130] > # cpushares = "5"
	I1201 21:07:14.739685  521964 command_runner.go:130] > # cpuquota = "1000"
	I1201 21:07:14.739694  521964 command_runner.go:130] > # cpuperiod = "100000"
	I1201 21:07:14.739698  521964 command_runner.go:130] > # cpulimit = "35"
	I1201 21:07:14.739705  521964 command_runner.go:130] > # Where:
	I1201 21:07:14.739709  521964 command_runner.go:130] > # The workload name is workload-type.
	I1201 21:07:14.739716  521964 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1201 21:07:14.739728  521964 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1201 21:07:14.739739  521964 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1201 21:07:14.739752  521964 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1201 21:07:14.739762  521964 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1201 21:07:14.739768  521964 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1201 21:07:14.739778  521964 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1201 21:07:14.739786  521964 command_runner.go:130] > # Default value is set to true
	I1201 21:07:14.739791  521964 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1201 21:07:14.739803  521964 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1201 21:07:14.739813  521964 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1201 21:07:14.739818  521964 command_runner.go:130] > # Default value is set to 'false'
	I1201 21:07:14.739822  521964 command_runner.go:130] > # disable_hostport_mapping = false
	I1201 21:07:14.739830  521964 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1201 21:07:14.739839  521964 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1201 21:07:14.739846  521964 command_runner.go:130] > # timezone = ""
	I1201 21:07:14.739853  521964 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1201 21:07:14.739859  521964 command_runner.go:130] > #
	I1201 21:07:14.739866  521964 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1201 21:07:14.739884  521964 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1201 21:07:14.739892  521964 command_runner.go:130] > [crio.image]
	I1201 21:07:14.739898  521964 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1201 21:07:14.739903  521964 command_runner.go:130] > # default_transport = "docker://"
	I1201 21:07:14.739913  521964 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1201 21:07:14.739919  521964 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1201 21:07:14.739926  521964 command_runner.go:130] > # global_auth_file = ""
	I1201 21:07:14.739931  521964 command_runner.go:130] > # The image used to instantiate infra containers.
	I1201 21:07:14.739940  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.739952  521964 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1201 21:07:14.739964  521964 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1201 21:07:14.739973  521964 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1201 21:07:14.739979  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.739986  521964 command_runner.go:130] > # pause_image_auth_file = ""
	I1201 21:07:14.739993  521964 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1201 21:07:14.740002  521964 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1201 21:07:14.740009  521964 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1201 21:07:14.740029  521964 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1201 21:07:14.740037  521964 command_runner.go:130] > # pause_command = "/pause"
	I1201 21:07:14.740044  521964 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1201 21:07:14.740053  521964 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1201 21:07:14.740060  521964 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1201 21:07:14.740070  521964 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1201 21:07:14.740076  521964 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1201 21:07:14.740086  521964 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1201 21:07:14.740091  521964 command_runner.go:130] > # pinned_images = [
	I1201 21:07:14.740093  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740110  521964 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1201 21:07:14.740121  521964 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1201 21:07:14.740133  521964 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1201 21:07:14.740143  521964 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1201 21:07:14.740153  521964 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1201 21:07:14.740158  521964 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1201 21:07:14.740167  521964 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1201 21:07:14.740181  521964 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1201 21:07:14.740204  521964 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1201 21:07:14.740215  521964 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1201 21:07:14.740226  521964 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1201 21:07:14.740236  521964 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1201 21:07:14.740243  521964 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1201 21:07:14.740259  521964 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1201 21:07:14.740263  521964 command_runner.go:130] > # changing them here.
	I1201 21:07:14.740273  521964 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1201 21:07:14.740278  521964 command_runner.go:130] > # insecure_registries = [
	I1201 21:07:14.740285  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740293  521964 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1201 21:07:14.740302  521964 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1201 21:07:14.740306  521964 command_runner.go:130] > # image_volumes = "mkdir"
	I1201 21:07:14.740316  521964 command_runner.go:130] > # Temporary directory to use for storing big files
	I1201 21:07:14.740321  521964 command_runner.go:130] > # big_files_temporary_dir = ""
	I1201 21:07:14.740340  521964 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1201 21:07:14.740349  521964 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1201 21:07:14.740358  521964 command_runner.go:130] > # auto_reload_registries = false
	I1201 21:07:14.740364  521964 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1201 21:07:14.740376  521964 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1201 21:07:14.740387  521964 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1201 21:07:14.740391  521964 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1201 21:07:14.740399  521964 command_runner.go:130] > # The mode of short name resolution.
	I1201 21:07:14.740415  521964 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1201 21:07:14.740423  521964 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1201 21:07:14.740428  521964 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1201 21:07:14.740436  521964 command_runner.go:130] > # short_name_mode = "enforcing"
	I1201 21:07:14.740443  521964 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1201 21:07:14.740453  521964 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1201 21:07:14.740462  521964 command_runner.go:130] > # oci_artifact_mount_support = true
	I1201 21:07:14.740469  521964 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1201 21:07:14.740484  521964 command_runner.go:130] > # CNI plugins.
	I1201 21:07:14.740492  521964 command_runner.go:130] > [crio.network]
	I1201 21:07:14.740498  521964 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1201 21:07:14.740504  521964 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1201 21:07:14.740512  521964 command_runner.go:130] > # cni_default_network = ""
	I1201 21:07:14.740519  521964 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1201 21:07:14.740530  521964 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1201 21:07:14.740540  521964 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1201 21:07:14.740549  521964 command_runner.go:130] > # plugin_dirs = [
	I1201 21:07:14.740562  521964 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1201 21:07:14.740566  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740576  521964 command_runner.go:130] > # List of included pod metrics.
	I1201 21:07:14.740580  521964 command_runner.go:130] > # included_pod_metrics = [
	I1201 21:07:14.740583  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740588  521964 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1201 21:07:14.740596  521964 command_runner.go:130] > [crio.metrics]
	I1201 21:07:14.740602  521964 command_runner.go:130] > # Globally enable or disable metrics support.
	I1201 21:07:14.740614  521964 command_runner.go:130] > # enable_metrics = false
	I1201 21:07:14.740622  521964 command_runner.go:130] > # Specify enabled metrics collectors.
	I1201 21:07:14.740637  521964 command_runner.go:130] > # Per default all metrics are enabled.
	I1201 21:07:14.740644  521964 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1201 21:07:14.740655  521964 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1201 21:07:14.740662  521964 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1201 21:07:14.740666  521964 command_runner.go:130] > # metrics_collectors = [
	I1201 21:07:14.740674  521964 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1201 21:07:14.740680  521964 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1201 21:07:14.740688  521964 command_runner.go:130] > # 	"containers_oom_total",
	I1201 21:07:14.740692  521964 command_runner.go:130] > # 	"processes_defunct",
	I1201 21:07:14.740706  521964 command_runner.go:130] > # 	"operations_total",
	I1201 21:07:14.740714  521964 command_runner.go:130] > # 	"operations_latency_seconds",
	I1201 21:07:14.740719  521964 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1201 21:07:14.740727  521964 command_runner.go:130] > # 	"operations_errors_total",
	I1201 21:07:14.740731  521964 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1201 21:07:14.740736  521964 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1201 21:07:14.740740  521964 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1201 21:07:14.740748  521964 command_runner.go:130] > # 	"image_pulls_success_total",
	I1201 21:07:14.740753  521964 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1201 21:07:14.740761  521964 command_runner.go:130] > # 	"containers_oom_count_total",
	I1201 21:07:14.740766  521964 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1201 21:07:14.740780  521964 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1201 21:07:14.740789  521964 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1201 21:07:14.740792  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740803  521964 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1201 21:07:14.740807  521964 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1201 21:07:14.740812  521964 command_runner.go:130] > # The port on which the metrics server will listen.
	I1201 21:07:14.740816  521964 command_runner.go:130] > # metrics_port = 9090
	I1201 21:07:14.740825  521964 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1201 21:07:14.740829  521964 command_runner.go:130] > # metrics_socket = ""
	I1201 21:07:14.740839  521964 command_runner.go:130] > # The certificate for the secure metrics server.
	I1201 21:07:14.740846  521964 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1201 21:07:14.740867  521964 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1201 21:07:14.740879  521964 command_runner.go:130] > # certificate on any modification event.
	I1201 21:07:14.740883  521964 command_runner.go:130] > # metrics_cert = ""
	I1201 21:07:14.740888  521964 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1201 21:07:14.740897  521964 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1201 21:07:14.740901  521964 command_runner.go:130] > # metrics_key = ""
	I1201 21:07:14.740912  521964 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1201 21:07:14.740916  521964 command_runner.go:130] > [crio.tracing]
	I1201 21:07:14.740933  521964 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1201 21:07:14.740941  521964 command_runner.go:130] > # enable_tracing = false
	I1201 21:07:14.740946  521964 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1201 21:07:14.740959  521964 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1201 21:07:14.740966  521964 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1201 21:07:14.740970  521964 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1201 21:07:14.740975  521964 command_runner.go:130] > # CRI-O NRI configuration.
	I1201 21:07:14.740982  521964 command_runner.go:130] > [crio.nri]
	I1201 21:07:14.740987  521964 command_runner.go:130] > # Globally enable or disable NRI.
	I1201 21:07:14.740993  521964 command_runner.go:130] > # enable_nri = true
	I1201 21:07:14.741004  521964 command_runner.go:130] > # NRI socket to listen on.
	I1201 21:07:14.741013  521964 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1201 21:07:14.741018  521964 command_runner.go:130] > # NRI plugin directory to use.
	I1201 21:07:14.741026  521964 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1201 21:07:14.741031  521964 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1201 21:07:14.741039  521964 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1201 21:07:14.741046  521964 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1201 21:07:14.741111  521964 command_runner.go:130] > # nri_disable_connections = false
	I1201 21:07:14.741122  521964 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1201 21:07:14.741131  521964 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1201 21:07:14.741137  521964 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1201 21:07:14.741142  521964 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1201 21:07:14.741156  521964 command_runner.go:130] > # NRI default validator configuration.
	I1201 21:07:14.741167  521964 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1201 21:07:14.741178  521964 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1201 21:07:14.741190  521964 command_runner.go:130] > # can be restricted/rejected:
	I1201 21:07:14.741198  521964 command_runner.go:130] > # - OCI hook injection
	I1201 21:07:14.741206  521964 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1201 21:07:14.741214  521964 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1201 21:07:14.741218  521964 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1201 21:07:14.741229  521964 command_runner.go:130] > # - adjustment of linux namespaces
	I1201 21:07:14.741241  521964 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1201 21:07:14.741252  521964 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1201 21:07:14.741262  521964 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1201 21:07:14.741268  521964 command_runner.go:130] > #
	I1201 21:07:14.741276  521964 command_runner.go:130] > # [crio.nri.default_validator]
	I1201 21:07:14.741281  521964 command_runner.go:130] > # nri_enable_default_validator = false
	I1201 21:07:14.741290  521964 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1201 21:07:14.741295  521964 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1201 21:07:14.741308  521964 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1201 21:07:14.741318  521964 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1201 21:07:14.741323  521964 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1201 21:07:14.741331  521964 command_runner.go:130] > # nri_validator_required_plugins = [
	I1201 21:07:14.741334  521964 command_runner.go:130] > # ]
	I1201 21:07:14.741344  521964 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1201 21:07:14.741350  521964 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1201 21:07:14.741357  521964 command_runner.go:130] > [crio.stats]
	I1201 21:07:14.741364  521964 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1201 21:07:14.741379  521964 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1201 21:07:14.741384  521964 command_runner.go:130] > # stats_collection_period = 0
	I1201 21:07:14.741390  521964 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1201 21:07:14.741400  521964 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1201 21:07:14.741409  521964 command_runner.go:130] > # collection_period = 0
	I1201 21:07:14.743695  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.701489723Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1201 21:07:14.743741  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.701919228Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1201 21:07:14.743753  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.702192379Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1201 21:07:14.743761  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.70239116Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1201 21:07:14.743770  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.702743464Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.743783  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.703251326Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1201 21:07:14.743797  521964 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
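The CRI-O configuration dump above is assembled from /etc/crio/crio.conf plus drop-in files under /etc/crio/crio.conf.d/, applied in lexical order (see the "Updating config from drop-in file" messages just above). As a hedged illustration only, a minimal drop-in that enables the Prometheus endpoint described in the [crio.metrics] section could look like this (hypothetical file name; the values simply mirror the commented defaults in the dump):

```toml
# /etc/crio/crio.conf.d/20-metrics.conf  (hypothetical drop-in file)
[crio.metrics]
enable_metrics = true
metrics_host = "127.0.0.1"
metrics_port = 9090
```

Because drop-ins are merged in lexical order, a file named 20-metrics.conf would override the same keys set by the 02-crio.conf and 10-crio.conf files that the log shows being loaded.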
	I1201 21:07:14.743882  521964 cni.go:84] Creating CNI manager for ""
	I1201 21:07:14.743892  521964 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:07:14.743907  521964 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 21:07:14.743929  521964 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-198694 NodeName:functional-198694 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 21:07:14.744055  521964 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-198694"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
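The generated kubeadm config above is four YAML documents in a single file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), separated by `---`. A quick, self-contained sanity check of that layout can be sketched as follows (the heredoc sample stands in for /var/tmp/minikube/kubeadm.yaml.new, which only exists on the test node):

```shell
# Write a minimal stand-in with the same four-document structure.
cat > /tmp/kubeadm-sample.yaml <<'EOF'
kind: InitConfiguration
---
kind: ClusterConfiguration
---
kind: KubeletConfiguration
---
kind: KubeProxyConfiguration
EOF

# Three separators imply four documents; list each document's kind.
grep -c '^---$' /tmp/kubeadm-sample.yaml
grep '^kind:' /tmp/kubeadm-sample.yaml
```

A structural check like this catches a truncated or mis-concatenated config before kubeadm is ever invoked.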
	I1201 21:07:14.744124  521964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 21:07:14.751405  521964 command_runner.go:130] > kubeadm
	I1201 21:07:14.751425  521964 command_runner.go:130] > kubectl
	I1201 21:07:14.751429  521964 command_runner.go:130] > kubelet
	I1201 21:07:14.752384  521964 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 21:07:14.752448  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 21:07:14.760026  521964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 21:07:14.773137  521964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 21:07:14.786891  521964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1201 21:07:14.799994  521964 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1201 21:07:14.803501  521964 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1201 21:07:14.803615  521964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:07:14.920306  521964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 21:07:15.405274  521964 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694 for IP: 192.168.49.2
	I1201 21:07:15.405300  521964 certs.go:195] generating shared ca certs ...
	I1201 21:07:15.405343  521964 certs.go:227] acquiring lock for ca certs: {Name:mk0475ccdbd6f854bab22fd8dfb32cc1af021336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:15.405542  521964 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key
	I1201 21:07:15.405589  521964 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key
	I1201 21:07:15.405597  521964 certs.go:257] generating profile certs ...
	I1201 21:07:15.405726  521964 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key
	I1201 21:07:15.405806  521964 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key.ab5f5a28
	I1201 21:07:15.405849  521964 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key
	I1201 21:07:15.405858  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1201 21:07:15.405870  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1201 21:07:15.405880  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1201 21:07:15.405895  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1201 21:07:15.405908  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1201 21:07:15.405920  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1201 21:07:15.405931  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1201 21:07:15.405941  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1201 21:07:15.406006  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem (1338 bytes)
	W1201 21:07:15.406049  521964 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002_empty.pem, impossibly tiny 0 bytes
	I1201 21:07:15.406068  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 21:07:15.406113  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem (1082 bytes)
	I1201 21:07:15.406137  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem (1123 bytes)
	I1201 21:07:15.406172  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem (1675 bytes)
	I1201 21:07:15.406237  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:07:15.406287  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem -> /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.406308  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.406325  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.407085  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 21:07:15.435325  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 21:07:15.460453  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 21:07:15.484820  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1201 21:07:15.503541  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 21:07:15.522001  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 21:07:15.540074  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 21:07:15.557935  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 21:07:15.576709  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem --> /usr/share/ca-certificates/486002.pem (1338 bytes)
	I1201 21:07:15.595484  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /usr/share/ca-certificates/4860022.pem (1708 bytes)
	I1201 21:07:15.614431  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 21:07:15.632609  521964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 21:07:15.645463  521964 ssh_runner.go:195] Run: openssl version
	I1201 21:07:15.651732  521964 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1201 21:07:15.652120  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/486002.pem && ln -fs /usr/share/ca-certificates/486002.pem /etc/ssl/certs/486002.pem"
	I1201 21:07:15.660522  521964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.664099  521964 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.664137  521964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.664196  521964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.704899  521964 command_runner.go:130] > 51391683
	I1201 21:07:15.705348  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/486002.pem /etc/ssl/certs/51391683.0"
	I1201 21:07:15.713374  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4860022.pem && ln -fs /usr/share/ca-certificates/4860022.pem /etc/ssl/certs/4860022.pem"
	I1201 21:07:15.721756  521964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.725563  521964 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.725613  521964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.725662  521964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.766341  521964 command_runner.go:130] > 3ec20f2e
	I1201 21:07:15.766756  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4860022.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 21:07:15.774531  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 21:07:15.784868  521964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.788871  521964 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.788929  521964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.788991  521964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.829962  521964 command_runner.go:130] > b5213941
	I1201 21:07:15.830101  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 21:07:15.838399  521964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 21:07:15.842255  521964 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 21:07:15.842282  521964 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1201 21:07:15.842289  521964 command_runner.go:130] > Device: 259,1	Inode: 2345358     Links: 1
	I1201 21:07:15.842296  521964 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1201 21:07:15.842308  521964 command_runner.go:130] > Access: 2025-12-01 21:03:07.261790641 +0000
	I1201 21:07:15.842313  521964 command_runner.go:130] > Modify: 2025-12-01 20:59:03.599058650 +0000
	I1201 21:07:15.842318  521964 command_runner.go:130] > Change: 2025-12-01 20:59:03.599058650 +0000
	I1201 21:07:15.842324  521964 command_runner.go:130] >  Birth: 2025-12-01 20:59:03.599058650 +0000
	I1201 21:07:15.842405  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 21:07:15.883885  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:15.884377  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 21:07:15.925029  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:15.925488  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 21:07:15.967363  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:15.967505  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 21:07:16.008933  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:16.009470  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 21:07:16.052395  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:16.052881  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 21:07:16.094441  521964 command_runner.go:130] > Certificate will not expire
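The certificate steps above combine two OpenSSL idioms: `openssl x509 -hash -noout` computes the subject hash that names the `/etc/ssl/certs/<hash>.0` symlinks, and `-checkend 86400` verifies the certificate will not expire within the next 24 hours (86400 seconds), printing "Certificate will not expire" on success. Both can be reproduced against a throwaway self-signed certificate (hypothetical /tmp file names; assumes `openssl` is installed):

```shell
# Generate a short-lived self-signed certificate to test against.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 30 -subj "/CN=demo" 2>/dev/null

# Subject hash: this is what names the /etc/ssl/certs/<hash>.0 symlink.
hash=$(openssl x509 -hash -noout -in /tmp/demo.crt)
echo "symlink name would be: ${hash}.0"

# Same 24-hour expiry check the log performs (-checkend takes seconds).
openssl x509 -noout -in /tmp/demo.crt -checkend 86400
```

The exit status of the `-checkend` call (0 when the cert outlives the window) is what lets minikube skip certificate regeneration on restart.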
	I1201 21:07:16.094868  521964 kubeadm.go:401] StartCluster: {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:07:16.094970  521964 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 21:07:16.095033  521964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 21:07:16.122671  521964 cri.go:89] found id: ""
	I1201 21:07:16.122745  521964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 21:07:16.129629  521964 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1201 21:07:16.129704  521964 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1201 21:07:16.129749  521964 command_runner.go:130] > /var/lib/minikube/etcd:
	I1201 21:07:16.130618  521964 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 21:07:16.130634  521964 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 21:07:16.130700  521964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 21:07:16.138263  521964 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:07:16.138690  521964 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-198694" does not appear in /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.138796  521964 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-482752/kubeconfig needs updating (will repair): [kubeconfig missing "functional-198694" cluster setting kubeconfig missing "functional-198694" context setting]
	I1201 21:07:16.139097  521964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/kubeconfig: {Name:mk92cfd0553ba70a7f11610c1bc1b8b04b905ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:16.139560  521964 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.139697  521964 kapi.go:59] client config for functional-198694: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key", CAFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 21:07:16.140229  521964 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1201 21:07:16.140256  521964 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1201 21:07:16.140265  521964 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1201 21:07:16.140270  521964 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1201 21:07:16.140285  521964 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1201 21:07:16.140581  521964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 21:07:16.140673  521964 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1201 21:07:16.148484  521964 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1201 21:07:16.148518  521964 kubeadm.go:602] duration metric: took 17.877938ms to restartPrimaryControlPlane
	I1201 21:07:16.148528  521964 kubeadm.go:403] duration metric: took 53.667619ms to StartCluster
	I1201 21:07:16.148545  521964 settings.go:142] acquiring lock: {Name:mk783c1fd28fb527bb837882511f132133dc86fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:16.148604  521964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.149244  521964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/kubeconfig: {Name:mk92cfd0553ba70a7f11610c1bc1b8b04b905ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:16.149450  521964 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 21:07:16.149837  521964 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:07:16.149887  521964 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 21:07:16.149959  521964 addons.go:70] Setting storage-provisioner=true in profile "functional-198694"
	I1201 21:07:16.149971  521964 addons.go:239] Setting addon storage-provisioner=true in "functional-198694"
	I1201 21:07:16.149997  521964 host.go:66] Checking if "functional-198694" exists ...
	I1201 21:07:16.150469  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:16.150813  521964 addons.go:70] Setting default-storageclass=true in profile "functional-198694"
	I1201 21:07:16.150847  521964 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-198694"
	I1201 21:07:16.151095  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:16.157800  521964 out.go:179] * Verifying Kubernetes components...
	I1201 21:07:16.160495  521964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:07:16.191854  521964 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 21:07:16.194709  521964 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:16.194728  521964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 21:07:16.194804  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:16.200857  521964 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.201020  521964 kapi.go:59] client config for functional-198694: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key", CAFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 21:07:16.201620  521964 addons.go:239] Setting addon default-storageclass=true in "functional-198694"
	I1201 21:07:16.201664  521964 host.go:66] Checking if "functional-198694" exists ...
	I1201 21:07:16.202447  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:16.245603  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:16.261120  521964 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:16.261144  521964 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 21:07:16.261216  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:16.294119  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:16.373164  521964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 21:07:16.408855  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:16.445769  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:17.156317  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.156488  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156559  521964 retry.go:31] will retry after 323.483538ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156628  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.156659  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156673  521964 retry.go:31] will retry after 132.387182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156540  521964 node_ready.go:35] waiting up to 6m0s for node "functional-198694" to be "Ready" ...
	I1201 21:07:17.156859  521964 type.go:168] "Request Body" body=""
	I1201 21:07:17.156951  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:17.157318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:17.289607  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:17.345927  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.349389  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.349423  521964 retry.go:31] will retry after 369.598465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.480797  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:17.537300  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.541071  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.541105  521964 retry.go:31] will retry after 250.665906ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.657414  521964 type.go:168] "Request Body" body=""
	I1201 21:07:17.657490  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:17.657803  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:17.720223  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:17.783305  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.783341  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.783362  521964 retry.go:31] will retry after 375.003536ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.792548  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:17.854946  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.854989  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.855009  521964 retry.go:31] will retry after 643.882626ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.157596  521964 type.go:168] "Request Body" body=""
	I1201 21:07:18.157670  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:18.158003  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:18.159267  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:18.225579  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:18.225683  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.225726  521964 retry.go:31] will retry after 1.172405999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.500161  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:18.566908  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:18.566958  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.566979  521964 retry.go:31] will retry after 1.221518169s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.657190  521964 type.go:168] "Request Body" body=""
	I1201 21:07:18.657271  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:18.657601  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:19.157332  521964 type.go:168] "Request Body" body=""
	I1201 21:07:19.157408  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:19.157736  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:19.157807  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:19.398291  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:19.478299  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:19.478401  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:19.478424  521964 retry.go:31] will retry after 725.636222ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:19.657679  521964 type.go:168] "Request Body" body=""
	I1201 21:07:19.657755  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:19.658075  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:19.789414  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:19.847191  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:19.847229  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:19.847250  521964 retry.go:31] will retry after 688.680113ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.157514  521964 type.go:168] "Request Body" body=""
	I1201 21:07:20.157586  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:20.157835  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:20.205210  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:20.265409  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:20.265448  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.265467  521964 retry.go:31] will retry after 1.46538703s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.536913  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:20.597058  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:20.597109  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.597130  521964 retry.go:31] will retry after 1.65793185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.657434  521964 type.go:168] "Request Body" body=""
	I1201 21:07:20.657509  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:20.657856  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:21.157726  521964 type.go:168] "Request Body" body=""
	I1201 21:07:21.157805  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:21.158133  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:21.158204  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:21.656922  521964 type.go:168] "Request Body" body=""
	I1201 21:07:21.657048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:21.657367  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:21.731621  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:21.794486  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:21.794526  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:21.794546  521964 retry.go:31] will retry after 2.907930062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:22.156980  521964 type.go:168] "Request Body" body=""
	I1201 21:07:22.157055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:22.157385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:22.255851  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:22.319449  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:22.319491  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:22.319511  521964 retry.go:31] will retry after 2.874628227s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:22.656962  521964 type.go:168] "Request Body" body=""
	I1201 21:07:22.657055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:22.657381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:23.156910  521964 type.go:168] "Request Body" body=""
	I1201 21:07:23.156984  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:23.157294  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:23.656973  521964 type.go:168] "Request Body" body=""
	I1201 21:07:23.657065  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:23.657420  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:23.657472  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:24.157139  521964 type.go:168] "Request Body" body=""
	I1201 21:07:24.157221  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:24.157543  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:24.657245  521964 type.go:168] "Request Body" body=""
	I1201 21:07:24.657316  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:24.657622  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:24.702795  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:24.765996  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:24.766044  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:24.766064  521964 retry.go:31] will retry after 4.286350529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:25.157658  521964 type.go:168] "Request Body" body=""
	I1201 21:07:25.157735  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:25.158024  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:25.194368  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:25.250297  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:25.253946  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:25.253992  521964 retry.go:31] will retry after 4.844090269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:25.657562  521964 type.go:168] "Request Body" body=""
	I1201 21:07:25.657643  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:25.657986  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:25.658042  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:26.157893  521964 type.go:168] "Request Body" body=""
	I1201 21:07:26.157964  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:26.158227  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:26.657145  521964 type.go:168] "Request Body" body=""
	I1201 21:07:26.657225  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:26.657521  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:27.156930  521964 type.go:168] "Request Body" body=""
	I1201 21:07:27.157016  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:27.157343  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:27.656909  521964 type.go:168] "Request Body" body=""
	I1201 21:07:27.656990  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:27.657272  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:28.156970  521964 type.go:168] "Request Body" body=""
	I1201 21:07:28.157048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:28.157420  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:28.157476  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:28.657148  521964 type.go:168] "Request Body" body=""
	I1201 21:07:28.657244  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:28.657592  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:29.053156  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:29.109834  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:29.112973  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:29.113004  521964 retry.go:31] will retry after 7.544668628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:29.157167  521964 type.go:168] "Request Body" body=""
	I1201 21:07:29.157241  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:29.157507  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:29.656951  521964 type.go:168] "Request Body" body=""
	I1201 21:07:29.657043  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:29.657380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:30.099244  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:30.157873  521964 type.go:168] "Request Body" body=""
	I1201 21:07:30.157941  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:30.158210  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:30.158254  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:30.164980  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:30.165032  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:30.165052  521964 retry.go:31] will retry after 3.932491359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:30.657621  521964 type.go:168] "Request Body" body=""
	I1201 21:07:30.657701  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:30.657964  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:31.157730  521964 type.go:168] "Request Body" body=""
	I1201 21:07:31.157809  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:31.158140  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:31.656971  521964 type.go:168] "Request Body" body=""
	I1201 21:07:31.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:31.657377  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:32.156945  521964 type.go:168] "Request Body" body=""
	I1201 21:07:32.157020  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:32.157283  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:32.656981  521964 type.go:168] "Request Body" body=""
	I1201 21:07:32.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:32.657395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:32.657449  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:33.156987  521964 type.go:168] "Request Body" body=""
	I1201 21:07:33.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:33.157406  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:33.657102  521964 type.go:168] "Request Body" body=""
	I1201 21:07:33.657175  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:33.657460  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:34.097811  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:34.156372  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:34.156417  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:34.156437  521964 retry.go:31] will retry after 10.974576666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:34.157589  521964 type.go:168] "Request Body" body=""
	I1201 21:07:34.157652  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:34.157912  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:34.657701  521964 type.go:168] "Request Body" body=""
	I1201 21:07:34.657780  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:34.658097  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:34.658164  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:35.157826  521964 type.go:168] "Request Body" body=""
	I1201 21:07:35.157905  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:35.158165  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:35.656910  521964 type.go:168] "Request Body" body=""
	I1201 21:07:35.656988  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:35.657287  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:36.157319  521964 type.go:168] "Request Body" body=""
	I1201 21:07:36.157409  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:36.157794  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:36.657573  521964 type.go:168] "Request Body" body=""
	I1201 21:07:36.657644  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:36.657912  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:36.658034  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:36.730483  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:36.730533  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:36.730554  521964 retry.go:31] will retry after 6.063500375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:37.157009  521964 type.go:168] "Request Body" body=""
	I1201 21:07:37.157097  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:37.157448  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:37.157505  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:37.657206  521964 type.go:168] "Request Body" body=""
	I1201 21:07:37.657296  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:37.657631  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:38.157704  521964 type.go:168] "Request Body" body=""
	I1201 21:07:38.157772  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:38.158095  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:38.657893  521964 type.go:168] "Request Body" body=""
	I1201 21:07:38.657966  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:38.658289  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:39.156875  521964 type.go:168] "Request Body" body=""
	I1201 21:07:39.156971  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:39.157322  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:39.656915  521964 type.go:168] "Request Body" body=""
	I1201 21:07:39.657022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:39.657329  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:39.657378  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:40.156953  521964 type.go:168] "Request Body" body=""
	I1201 21:07:40.157033  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:40.157378  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:40.657085  521964 type.go:168] "Request Body" body=""
	I1201 21:07:40.657161  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:40.657485  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:41.157198  521964 type.go:168] "Request Body" body=""
	I1201 21:07:41.157267  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:41.157524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:41.657708  521964 type.go:168] "Request Body" body=""
	I1201 21:07:41.657786  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:41.658115  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:41.658168  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:42.157124  521964 type.go:168] "Request Body" body=""
	I1201 21:07:42.157211  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:42.157646  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:42.657031  521964 type.go:168] "Request Body" body=""
	I1201 21:07:42.657110  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:42.657398  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:42.794843  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:42.853617  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:42.853659  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:42.853680  521964 retry.go:31] will retry after 14.65335173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:43.156952  521964 type.go:168] "Request Body" body=""
	I1201 21:07:43.157032  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:43.157349  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:43.656979  521964 type.go:168] "Request Body" body=""
	I1201 21:07:43.657076  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:43.657394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:44.156994  521964 type.go:168] "Request Body" body=""
	I1201 21:07:44.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:44.157343  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:44.157384  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:44.656997  521964 type.go:168] "Request Body" body=""
	I1201 21:07:44.657087  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:44.657396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:45.131211  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:45.157806  521964 type.go:168] "Request Body" body=""
	I1201 21:07:45.157891  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:45.158212  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:45.221334  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:45.221384  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:45.221409  521964 retry.go:31] will retry after 11.551495399s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:45.657812  521964 type.go:168] "Request Body" body=""
	I1201 21:07:45.657890  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:45.658158  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:46.157214  521964 type.go:168] "Request Body" body=""
	I1201 21:07:46.157292  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:46.157581  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:46.157642  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:46.657575  521964 type.go:168] "Request Body" body=""
	I1201 21:07:46.657647  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:46.657977  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:47.157285  521964 type.go:168] "Request Body" body=""
	I1201 21:07:47.157350  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:47.157647  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:47.656995  521964 type.go:168] "Request Body" body=""
	I1201 21:07:47.657075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:47.657403  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:48.156986  521964 type.go:168] "Request Body" body=""
	I1201 21:07:48.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:48.157380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:48.657048  521964 type.go:168] "Request Body" body=""
	I1201 21:07:48.657118  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:48.657442  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:48.657502  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:49.157019  521964 type.go:168] "Request Body" body=""
	I1201 21:07:49.157102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:49.157404  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:49.657133  521964 type.go:168] "Request Body" body=""
	I1201 21:07:49.657208  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:49.657513  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:50.156941  521964 type.go:168] "Request Body" body=""
	I1201 21:07:50.157013  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:50.157268  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:50.656997  521964 type.go:168] "Request Body" body=""
	I1201 21:07:50.657077  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:50.657401  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:51.157109  521964 type.go:168] "Request Body" body=""
	I1201 21:07:51.157181  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:51.157563  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:51.157620  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:51.657409  521964 type.go:168] "Request Body" body=""
	I1201 21:07:51.657480  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:51.657812  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:52.157619  521964 type.go:168] "Request Body" body=""
	I1201 21:07:52.157701  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:52.158034  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:52.657819  521964 type.go:168] "Request Body" body=""
	I1201 21:07:52.657897  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:52.658222  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:53.157452  521964 type.go:168] "Request Body" body=""
	I1201 21:07:53.157532  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:53.157789  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:53.157829  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:53.657659  521964 type.go:168] "Request Body" body=""
	I1201 21:07:53.657737  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:53.658067  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:54.157887  521964 type.go:168] "Request Body" body=""
	I1201 21:07:54.157963  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:54.158311  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:54.656870  521964 type.go:168] "Request Body" body=""
	I1201 21:07:54.656941  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:54.657207  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:55.156922  521964 type.go:168] "Request Body" body=""
	I1201 21:07:55.156998  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:55.157347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:55.656937  521964 type.go:168] "Request Body" body=""
	I1201 21:07:55.657065  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:55.657390  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:55.657445  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:56.157203  521964 type.go:168] "Request Body" body=""
	I1201 21:07:56.157283  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:56.157556  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:56.657510  521964 type.go:168] "Request Body" body=""
	I1201 21:07:56.657589  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:56.657925  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:56.773160  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:56.828599  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:56.831983  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:56.832017  521964 retry.go:31] will retry after 19.593958555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:57.157556  521964 type.go:168] "Request Body" body=""
	I1201 21:07:57.157632  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:57.157962  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:57.507290  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:57.561691  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:57.565020  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:57.565054  521964 retry.go:31] will retry after 13.393925675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:57.657318  521964 type.go:168] "Request Body" body=""
	I1201 21:07:57.657392  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:57.657711  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:57.657760  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:58.157573  521964 type.go:168] "Request Body" body=""
	I1201 21:07:58.157646  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:58.157951  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:58.657731  521964 type.go:168] "Request Body" body=""
	I1201 21:07:58.657806  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:58.658143  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:59.157766  521964 type.go:168] "Request Body" body=""
	I1201 21:07:59.157844  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:59.158113  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:59.657909  521964 type.go:168] "Request Body" body=""
	I1201 21:07:59.657992  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:59.658327  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:59.658388  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:00.157067  521964 type.go:168] "Request Body" body=""
	I1201 21:08:00.157155  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:00.157502  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:00.656909  521964 type.go:168] "Request Body" body=""
	I1201 21:08:00.656981  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:00.657260  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:01.156980  521964 type.go:168] "Request Body" body=""
	I1201 21:08:01.157061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:01.157434  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:01.656989  521964 type.go:168] "Request Body" body=""
	I1201 21:08:01.657067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:01.657427  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:02.157121  521964 type.go:168] "Request Body" body=""
	I1201 21:08:02.157192  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:02.157450  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:02.157491  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:02.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:08:02.657058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:02.657389  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:03.156950  521964 type.go:168] "Request Body" body=""
	I1201 21:08:03.157027  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:03.157381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:03.657653  521964 type.go:168] "Request Body" body=""
	I1201 21:08:03.657724  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:03.658043  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:04.157851  521964 type.go:168] "Request Body" body=""
	I1201 21:08:04.157926  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:04.158295  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:04.158353  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:04.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:08:04.656984  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:04.657308  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:05.159258  521964 type.go:168] "Request Body" body=""
	I1201 21:08:05.159335  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:05.159644  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:05.656978  521964 type.go:168] "Request Body" body=""
	I1201 21:08:05.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:05.657400  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:06.157419  521964 type.go:168] "Request Body" body=""
	I1201 21:08:06.157493  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:06.157828  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:06.657668  521964 type.go:168] "Request Body" body=""
	I1201 21:08:06.657743  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:06.658026  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:06.658074  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:07.157783  521964 type.go:168] "Request Body" body=""
	I1201 21:08:07.157860  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:07.158171  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:07.656931  521964 type.go:168] "Request Body" body=""
	I1201 21:08:07.657012  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:07.657345  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:08.157032  521964 type.go:168] "Request Body" body=""
	I1201 21:08:08.157106  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:08.157464  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:08.657162  521964 type.go:168] "Request Body" body=""
	I1201 21:08:08.657254  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:08.657573  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:09.157296  521964 type.go:168] "Request Body" body=""
	I1201 21:08:09.157373  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:09.157697  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:09.157750  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:09.657059  521964 type.go:168] "Request Body" body=""
	I1201 21:08:09.657141  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:09.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:10.156962  521964 type.go:168] "Request Body" body=""
	I1201 21:08:10.157037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:10.157365  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:10.656967  521964 type.go:168] "Request Body" body=""
	I1201 21:08:10.657051  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:10.657364  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:10.960044  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:08:11.016321  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:11.019785  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:11.019824  521964 retry.go:31] will retry after 44.695855679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:11.156928  521964 type.go:168] "Request Body" body=""
	I1201 21:08:11.157027  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:11.157315  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:11.657003  521964 type.go:168] "Request Body" body=""
	I1201 21:08:11.657075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:11.657408  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:11.657463  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:12.157121  521964 type.go:168] "Request Body" body=""
	I1201 21:08:12.157198  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:12.157770  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:12.657058  521964 type.go:168] "Request Body" body=""
	I1201 21:08:12.657134  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:12.657388  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:13.156995  521964 type.go:168] "Request Body" body=""
	I1201 21:08:13.157075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:13.157385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:13.657097  521964 type.go:168] "Request Body" body=""
	I1201 21:08:13.657169  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:13.657467  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:13.657512  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:14.156922  521964 type.go:168] "Request Body" body=""
	I1201 21:08:14.157012  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:14.157318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:14.657025  521964 type.go:168] "Request Body" body=""
	I1201 21:08:14.657098  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:14.657429  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:15.157163  521964 type.go:168] "Request Body" body=""
	I1201 21:08:15.157273  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:15.157607  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:15.657300  521964 type.go:168] "Request Body" body=""
	I1201 21:08:15.657393  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:15.657718  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:15.657762  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:16.157618  521964 type.go:168] "Request Body" body=""
	I1201 21:08:16.157704  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:16.158073  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:16.426568  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:08:16.504541  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:16.504580  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:16.504599  521964 retry.go:31] will retry after 41.569353087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:16.657931  521964 type.go:168] "Request Body" body=""
	I1201 21:08:16.658002  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:16.658310  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:17.156879  521964 type.go:168] "Request Body" body=""
	I1201 21:08:17.156968  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:17.157222  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:17.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:08:17.657052  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:17.657405  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:18.157142  521964 type.go:168] "Request Body" body=""
	I1201 21:08:18.157229  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:18.157610  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:18.157665  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:18.657777  521964 type.go:168] "Request Body" body=""
	I1201 21:08:18.657865  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:18.658174  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:19.156889  521964 type.go:168] "Request Body" body=""
	I1201 21:08:19.156967  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:19.157284  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:19.657012  521964 type.go:168] "Request Body" body=""
	I1201 21:08:19.657096  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:19.657452  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:20.156910  521964 type.go:168] "Request Body" body=""
	I1201 21:08:20.156981  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:20.157264  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:20.656990  521964 type.go:168] "Request Body" body=""
	I1201 21:08:20.657076  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:20.657458  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:20.657526  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:21.157000  521964 type.go:168] "Request Body" body=""
	I1201 21:08:21.157092  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:21.157485  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:21.656883  521964 type.go:168] "Request Body" body=""
	I1201 21:08:21.656968  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:21.657320  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:22.157049  521964 type.go:168] "Request Body" body=""
	I1201 21:08:22.157135  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:22.157505  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:22.657283  521964 type.go:168] "Request Body" body=""
	I1201 21:08:22.657387  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:22.657820  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:22.657893  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:23.157642  521964 type.go:168] "Request Body" body=""
	I1201 21:08:23.157715  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:23.157983  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:23.657627  521964 type.go:168] "Request Body" body=""
	I1201 21:08:23.657716  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:23.658152  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:24.156945  521964 type.go:168] "Request Body" body=""
	I1201 21:08:24.157033  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:24.157478  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:24.657185  521964 type.go:168] "Request Body" body=""
	I1201 21:08:24.657275  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:24.657653  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:25.157527  521964 type.go:168] "Request Body" body=""
	I1201 21:08:25.157631  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:25.158006  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:25.158072  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:25.657861  521964 type.go:168] "Request Body" body=""
	I1201 21:08:25.657947  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:25.658305  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:26.157315  521964 type.go:168] "Request Body" body=""
	I1201 21:08:26.157387  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:26.157664  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:26.657761  521964 type.go:168] "Request Body" body=""
	I1201 21:08:26.657845  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:26.658250  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:27.157001  521964 type.go:168] "Request Body" body=""
	I1201 21:08:27.157089  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:27.157490  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:27.657204  521964 type.go:168] "Request Body" body=""
	I1201 21:08:27.657277  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:27.657573  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:27.657627  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:28.157001  521964 type.go:168] "Request Body" body=""
	I1201 21:08:28.157095  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:28.157476  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:28.657072  521964 type.go:168] "Request Body" body=""
	I1201 21:08:28.657162  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:28.657537  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:29.157417  521964 type.go:168] "Request Body" body=""
	I1201 21:08:29.157501  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:29.157799  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:29.657718  521964 type.go:168] "Request Body" body=""
	I1201 21:08:29.657811  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:29.658220  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:29.658280  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:30.156978  521964 type.go:168] "Request Body" body=""
	I1201 21:08:30.157057  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:30.157401  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:30.656889  521964 type.go:168] "Request Body" body=""
	I1201 21:08:30.656971  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:30.657275  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:31.157026  521964 type.go:168] "Request Body" body=""
	I1201 21:08:31.157118  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:31.157518  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:31.656986  521964 type.go:168] "Request Body" body=""
	I1201 21:08:31.657072  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:31.657407  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:32.157753  521964 type.go:168] "Request Body" body=""
	I1201 21:08:32.157835  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:32.158232  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:32.158291  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:32.657000  521964 type.go:168] "Request Body" body=""
	I1201 21:08:32.657087  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:32.657475  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:33.157220  521964 type.go:168] "Request Body" body=""
	I1201 21:08:33.157305  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:33.157692  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:33.657487  521964 type.go:168] "Request Body" body=""
	I1201 21:08:33.657629  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:33.657931  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:34.157729  521964 type.go:168] "Request Body" body=""
	I1201 21:08:34.157800  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:34.158115  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:34.656912  521964 type.go:168] "Request Body" body=""
	I1201 21:08:34.657003  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:34.657412  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:34.657482  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:35.157152  521964 type.go:168] "Request Body" body=""
	I1201 21:08:35.157241  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:35.157546  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:35.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:08:35.657062  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:35.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:36.157282  521964 type.go:168] "Request Body" body=""
	I1201 21:08:36.157367  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:36.157727  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:36.657599  521964 type.go:168] "Request Body" body=""
	I1201 21:08:36.657686  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:36.657988  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:36.658045  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:37.157802  521964 type.go:168] "Request Body" body=""
	I1201 21:08:37.157896  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:37.158276  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:37.657031  521964 type.go:168] "Request Body" body=""
	I1201 21:08:37.657119  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:37.657486  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:38.157766  521964 type.go:168] "Request Body" body=""
	I1201 21:08:38.157842  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:38.158130  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:38.657916  521964 type.go:168] "Request Body" body=""
	I1201 21:08:38.657997  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:38.658359  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:38.658421  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:39.156995  521964 type.go:168] "Request Body" body=""
	I1201 21:08:39.157093  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:39.157508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:39.657230  521964 type.go:168] "Request Body" body=""
	I1201 21:08:39.657317  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:39.657685  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:40.157525  521964 type.go:168] "Request Body" body=""
	I1201 21:08:40.157614  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:40.157997  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:40.657880  521964 type.go:168] "Request Body" body=""
	I1201 21:08:40.657968  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:40.658348  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:41.156945  521964 type.go:168] "Request Body" body=""
	I1201 21:08:41.157030  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:41.157382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:41.157447  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:41.657680  521964 type.go:168] "Request Body" body=""
	I1201 21:08:41.657767  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:41.658134  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:42.156920  521964 type.go:168] "Request Body" body=""
	I1201 21:08:42.157036  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:42.157525  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:42.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:08:42.657005  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:42.657312  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:43.156990  521964 type.go:168] "Request Body" body=""
	I1201 21:08:43.157079  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:43.157479  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:43.157548  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:43.657235  521964 type.go:168] "Request Body" body=""
	I1201 21:08:43.657325  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:43.657683  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:44.157498  521964 type.go:168] "Request Body" body=""
	I1201 21:08:44.157581  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:44.158002  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:44.657818  521964 type.go:168] "Request Body" body=""
	I1201 21:08:44.657915  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:44.658331  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:45.157080  521964 type.go:168] "Request Body" body=""
	I1201 21:08:45.157172  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:45.157650  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:45.157719  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:45.656935  521964 type.go:168] "Request Body" body=""
	I1201 21:08:45.657016  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:45.657311  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:46.157385  521964 type.go:168] "Request Body" body=""
	I1201 21:08:46.157475  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:46.157855  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:46.657753  521964 type.go:168] "Request Body" body=""
	I1201 21:08:46.657842  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:46.658189  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:47.157536  521964 type.go:168] "Request Body" body=""
	I1201 21:08:47.157614  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:47.157944  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:47.157998  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:47.657747  521964 type.go:168] "Request Body" body=""
	I1201 21:08:47.657826  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:47.658196  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:48.157876  521964 type.go:168] "Request Body" body=""
	I1201 21:08:48.157958  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:48.158348  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:48.656928  521964 type.go:168] "Request Body" body=""
	I1201 21:08:48.657026  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:48.657375  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:49.156985  521964 type.go:168] "Request Body" body=""
	I1201 21:08:49.157067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:49.157458  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:49.657192  521964 type.go:168] "Request Body" body=""
	I1201 21:08:49.657287  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:49.657715  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:49.657793  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:50.157561  521964 type.go:168] "Request Body" body=""
	I1201 21:08:50.157644  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:50.157981  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:50.657775  521964 type.go:168] "Request Body" body=""
	I1201 21:08:50.657859  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:50.658229  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:51.156948  521964 type.go:168] "Request Body" body=""
	I1201 21:08:51.157046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:51.157416  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:51.656916  521964 type.go:168] "Request Body" body=""
	I1201 21:08:51.656999  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:51.657330  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:52.157006  521964 type.go:168] "Request Body" body=""
	I1201 21:08:52.157094  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:52.157485  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:52.157551  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:52.657260  521964 type.go:168] "Request Body" body=""
	I1201 21:08:52.657345  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:52.657796  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:53.157505  521964 type.go:168] "Request Body" body=""
	I1201 21:08:53.157589  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:53.157948  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:53.657814  521964 type.go:168] "Request Body" body=""
	I1201 21:08:53.657901  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:53.658274  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:54.157033  521964 type.go:168] "Request Body" body=""
	I1201 21:08:54.157120  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:54.157494  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:54.657829  521964 type.go:168] "Request Body" body=""
	I1201 21:08:54.657912  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:54.658226  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:54.658280  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:55.156969  521964 type.go:168] "Request Body" body=""
	I1201 21:08:55.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:55.157449  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:55.657040  521964 type.go:168] "Request Body" body=""
	I1201 21:08:55.657127  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:55.657483  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:55.716783  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:08:55.791498  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:55.795332  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:55.795559  521964 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1201 21:08:56.157158  521964 type.go:168] "Request Body" body=""
	I1201 21:08:56.157251  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:56.157619  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:56.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:08:56.657679  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:56.658038  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:57.157909  521964 type.go:168] "Request Body" body=""
	I1201 21:08:57.157989  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:57.158351  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:57.158413  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:57.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:08:57.656992  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:57.657380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:58.074174  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:08:58.149106  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:58.149168  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:58.149265  521964 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1201 21:08:58.152649  521964 out.go:179] * Enabled addons: 
	I1201 21:08:58.156383  521964 addons.go:530] duration metric: took 1m42.00648536s for enable addons: enabled=[]
	I1201 21:08:58.157274  521964 type.go:168] "Request Body" body=""
	I1201 21:08:58.157352  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:58.157737  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:58.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:08:58.657670  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:58.658025  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:59.157338  521964 type.go:168] "Request Body" body=""
	I1201 21:08:59.157435  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:59.157723  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:59.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:08:59.657679  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:59.658051  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:59.658126  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:00.157924  521964 type.go:168] "Request Body" body=""
	I1201 21:09:00.158055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:00.158429  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:00.656938  521964 type.go:168] "Request Body" body=""
	I1201 21:09:00.657023  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:00.657396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:01.157023  521964 type.go:168] "Request Body" body=""
	I1201 21:09:01.157113  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:01.157519  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:01.657045  521964 type.go:168] "Request Body" body=""
	I1201 21:09:01.657134  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:01.657523  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:02.157228  521964 type.go:168] "Request Body" body=""
	I1201 21:09:02.157309  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:02.157730  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:02.157812  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:02.657697  521964 type.go:168] "Request Body" body=""
	I1201 21:09:02.657797  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:02.658264  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:03.157016  521964 type.go:168] "Request Body" body=""
	I1201 21:09:03.157105  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:03.157506  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:03.656940  521964 type.go:168] "Request Body" body=""
	I1201 21:09:03.657023  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:03.657317  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:04.157109  521964 type.go:168] "Request Body" body=""
	I1201 21:09:04.157198  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:04.157621  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:04.657376  521964 type.go:168] "Request Body" body=""
	I1201 21:09:04.657464  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:04.657841  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:04.657911  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:05.157626  521964 type.go:168] "Request Body" body=""
	I1201 21:09:05.157704  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:05.158028  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:05.657928  521964 type.go:168] "Request Body" body=""
	I1201 21:09:05.658022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:05.658411  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:06.157283  521964 type.go:168] "Request Body" body=""
	I1201 21:09:06.157384  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:06.157756  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:06.657421  521964 type.go:168] "Request Body" body=""
	I1201 21:09:06.657507  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:06.657800  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:07.157695  521964 type.go:168] "Request Body" body=""
	I1201 21:09:07.157786  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:07.158194  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:07.158265  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:07.656962  521964 type.go:168] "Request Body" body=""
	I1201 21:09:07.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:07.657425  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:08.157836  521964 type.go:168] "Request Body" body=""
	I1201 21:09:08.157917  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:08.158191  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:08.657012  521964 type.go:168] "Request Body" body=""
	I1201 21:09:08.657104  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:08.657486  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:09.156993  521964 type.go:168] "Request Body" body=""
	I1201 21:09:09.157089  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:09.157471  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:09.657023  521964 type.go:168] "Request Body" body=""
	I1201 21:09:09.657120  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:09.657538  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:09.657606  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:10.156999  521964 type.go:168] "Request Body" body=""
	I1201 21:09:10.157086  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:10.157484  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:10.657232  521964 type.go:168] "Request Body" body=""
	I1201 21:09:10.657327  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:10.657688  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:11.157533  521964 type.go:168] "Request Body" body=""
	I1201 21:09:11.157620  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:11.157927  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:11.656987  521964 type.go:168] "Request Body" body=""
	I1201 21:09:11.657072  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:11.657395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:12.157102  521964 type.go:168] "Request Body" body=""
	I1201 21:09:12.157196  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:12.157583  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:12.157646  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:12.657123  521964 type.go:168] "Request Body" body=""
	I1201 21:09:12.657203  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:12.657499  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:13.156966  521964 type.go:168] "Request Body" body=""
	I1201 21:09:13.157055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:13.157438  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:13.656965  521964 type.go:168] "Request Body" body=""
	I1201 21:09:13.657049  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:13.657420  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:14.157820  521964 type.go:168] "Request Body" body=""
	I1201 21:09:14.157917  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:14.158213  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:14.158267  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:14.656980  521964 type.go:168] "Request Body" body=""
	I1201 21:09:14.657065  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:14.657454  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:15.157262  521964 type.go:168] "Request Body" body=""
	I1201 21:09:15.157373  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:15.157794  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:15.657581  521964 type.go:168] "Request Body" body=""
	I1201 21:09:15.657709  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:15.658011  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:16.157632  521964 type.go:168] "Request Body" body=""
	I1201 21:09:16.157709  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:16.158115  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:16.657457  521964 type.go:168] "Request Body" body=""
	I1201 21:09:16.657635  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:16.658136  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:16.658210  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:17.156895  521964 type.go:168] "Request Body" body=""
	I1201 21:09:17.157017  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:17.157412  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:17.657169  521964 type.go:168] "Request Body" body=""
	I1201 21:09:17.657255  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:17.657728  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:18.157777  521964 type.go:168] "Request Body" body=""
	I1201 21:09:18.157890  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:18.158292  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:18.656939  521964 type.go:168] "Request Body" body=""
	I1201 21:09:18.657031  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:18.657390  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:19.157017  521964 type.go:168] "Request Body" body=""
	I1201 21:09:19.157103  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:19.157518  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:19.157588  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:19.657290  521964 type.go:168] "Request Body" body=""
	I1201 21:09:19.657384  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:19.657811  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:20.157631  521964 type.go:168] "Request Body" body=""
	I1201 21:09:20.157730  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:20.158033  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:20.657806  521964 type.go:168] "Request Body" body=""
	I1201 21:09:20.657889  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:20.658276  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:21.156985  521964 type.go:168] "Request Body" body=""
	I1201 21:09:21.157070  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:21.157465  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:21.656919  521964 type.go:168] "Request Body" body=""
	I1201 21:09:21.657003  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:21.657335  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:21.657390  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:22.157006  521964 type.go:168] "Request Body" body=""
	I1201 21:09:22.157096  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:22.157477  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:22.657014  521964 type.go:168] "Request Body" body=""
	I1201 21:09:22.657111  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:22.657539  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:23.157097  521964 type.go:168] "Request Body" body=""
	I1201 21:09:23.157195  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:23.157520  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:23.656994  521964 type.go:168] "Request Body" body=""
	I1201 21:09:23.657090  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:23.657519  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:23.657588  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:24.157112  521964 type.go:168] "Request Body" body=""
	I1201 21:09:24.157201  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:24.157599  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:24.657319  521964 type.go:168] "Request Body" body=""
	I1201 21:09:24.657392  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:24.657673  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:25.156980  521964 type.go:168] "Request Body" body=""
	I1201 21:09:25.157061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:25.157490  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:25.657225  521964 type.go:168] "Request Body" body=""
	I1201 21:09:25.657322  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:25.657718  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:25.657784  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:26.157490  521964 type.go:168] "Request Body" body=""
	I1201 21:09:26.157579  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:26.157896  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:26.657062  521964 type.go:168] "Request Body" body=""
	I1201 21:09:26.657152  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:26.657508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:27.157009  521964 type.go:168] "Request Body" body=""
	I1201 21:09:27.157105  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:27.157490  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:27.656936  521964 type.go:168] "Request Body" body=""
	I1201 21:09:27.657022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:27.657384  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:28.157006  521964 type.go:168] "Request Body" body=""
	I1201 21:09:28.157101  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:28.157533  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:28.157613  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:28.657356  521964 type.go:168] "Request Body" body=""
	I1201 21:09:28.657444  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:28.657855  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:29.157632  521964 type.go:168] "Request Body" body=""
	I1201 21:09:29.157718  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:29.158017  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:29.657847  521964 type.go:168] "Request Body" body=""
	I1201 21:09:29.657938  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:29.658379  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:30.157140  521964 type.go:168] "Request Body" body=""
	I1201 21:09:30.157235  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:30.157673  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:30.157765  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:30.657527  521964 type.go:168] "Request Body" body=""
	I1201 21:09:30.657629  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:30.657947  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:31.157843  521964 type.go:168] "Request Body" body=""
	I1201 21:09:31.157942  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:31.158394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:31.657184  521964 type.go:168] "Request Body" body=""
	I1201 21:09:31.657271  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:31.657662  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:32.157380  521964 type.go:168] "Request Body" body=""
	I1201 21:09:32.157463  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:32.157761  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:32.157813  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:32.657593  521964 type.go:168] "Request Body" body=""
	I1201 21:09:32.657683  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:32.658044  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:33.157900  521964 type.go:168] "Request Body" body=""
	I1201 21:09:33.157992  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:33.158384  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:33.656918  521964 type.go:168] "Request Body" body=""
	I1201 21:09:33.656990  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:33.657277  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:34.156983  521964 type.go:168] "Request Body" body=""
	I1201 21:09:34.157073  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:34.157434  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:34.656961  521964 type.go:168] "Request Body" body=""
	I1201 21:09:34.657042  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:34.657395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:34.657466  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:35.157073  521964 type.go:168] "Request Body" body=""
	I1201 21:09:35.157156  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:35.157471  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:35.656996  521964 type.go:168] "Request Body" body=""
	I1201 21:09:35.657088  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:35.657485  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:36.157396  521964 type.go:168] "Request Body" body=""
	I1201 21:09:36.157480  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:36.157836  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:36.657755  521964 type.go:168] "Request Body" body=""
	I1201 21:09:36.657834  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:36.658119  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:36.658171  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:37.156863  521964 type.go:168] "Request Body" body=""
	I1201 21:09:37.156942  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:37.157295  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:37.657055  521964 type.go:168] "Request Body" body=""
	I1201 21:09:37.657144  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:37.657495  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:38.156908  521964 type.go:168] "Request Body" body=""
	I1201 21:09:38.156981  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:38.157238  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:38.656975  521964 type.go:168] "Request Body" body=""
	I1201 21:09:38.657054  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:38.657402  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:39.157119  521964 type.go:168] "Request Body" body=""
	I1201 21:09:39.157202  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:39.157574  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:39.157635  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:39.656880  521964 type.go:168] "Request Body" body=""
	I1201 21:09:39.656951  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:39.657222  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:40.156899  521964 type.go:168] "Request Body" body=""
	I1201 21:09:40.156974  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:40.157303  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:40.656905  521964 type.go:168] "Request Body" body=""
	I1201 21:09:40.656985  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:40.657322  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:41.157534  521964 type.go:168] "Request Body" body=""
	I1201 21:09:41.157609  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:41.157874  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:41.157915  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:41.657857  521964 type.go:168] "Request Body" body=""
	I1201 21:09:41.657938  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:41.658297  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:42.157048  521964 type.go:168] "Request Body" body=""
	I1201 21:09:42.157140  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:42.157537  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:42.657274  521964 type.go:168] "Request Body" body=""
	I1201 21:09:42.657353  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:42.657634  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:43.156967  521964 type.go:168] "Request Body" body=""
	I1201 21:09:43.157039  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:43.157360  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:43.656976  521964 type.go:168] "Request Body" body=""
	I1201 21:09:43.657053  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:43.657381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:43.657439  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:44.157645  521964 type.go:168] "Request Body" body=""
	I1201 21:09:44.157713  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:44.157985  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:44.657826  521964 type.go:168] "Request Body" body=""
	I1201 21:09:44.657923  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:44.658392  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:45.157027  521964 type.go:168] "Request Body" body=""
	I1201 21:09:45.157125  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:45.157611  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:45.656842  521964 type.go:168] "Request Body" body=""
	I1201 21:09:45.656917  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:45.657187  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:46.157288  521964 type.go:168] "Request Body" body=""
	I1201 21:09:46.157362  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:46.157699  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:46.157757  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:46.657562  521964 type.go:168] "Request Body" body=""
	I1201 21:09:46.657642  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:46.658013  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:47.157757  521964 type.go:168] "Request Body" body=""
	I1201 21:09:47.157829  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:47.158112  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:47.657894  521964 type.go:168] "Request Body" body=""
	I1201 21:09:47.657972  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:47.658340  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:48.156992  521964 type.go:168] "Request Body" body=""
	I1201 21:09:48.157083  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:48.157458  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:48.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:09:48.657654  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:48.657937  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:48.657979  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:49.157706  521964 type.go:168] "Request Body" body=""
	I1201 21:09:49.157785  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:49.158140  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:49.657844  521964 type.go:168] "Request Body" body=""
	I1201 21:09:49.657921  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:49.658333  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:50.156929  521964 type.go:168] "Request Body" body=""
	I1201 21:09:50.157000  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:50.157277  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:50.656964  521964 type.go:168] "Request Body" body=""
	I1201 21:09:50.657044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:50.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:51.157095  521964 type.go:168] "Request Body" body=""
	I1201 21:09:51.157176  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:51.157528  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:51.157583  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:51.656908  521964 type.go:168] "Request Body" body=""
	I1201 21:09:51.656978  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:51.657247  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:52.156952  521964 type.go:168] "Request Body" body=""
	I1201 21:09:52.157030  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:52.157355  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:52.656992  521964 type.go:168] "Request Body" body=""
	I1201 21:09:52.657082  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:52.657488  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:53.157030  521964 type.go:168] "Request Body" body=""
	I1201 21:09:53.157109  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:53.157430  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:53.656984  521964 type.go:168] "Request Body" body=""
	I1201 21:09:53.657067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:53.657399  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:53.657456  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:54.156969  521964 type.go:168] "Request Body" body=""
	I1201 21:09:54.157048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:54.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:54.657665  521964 type.go:168] "Request Body" body=""
	I1201 21:09:54.657741  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:54.658010  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:55.157807  521964 type.go:168] "Request Body" body=""
	I1201 21:09:55.157877  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:55.158212  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:55.656938  521964 type.go:168] "Request Body" body=""
	I1201 21:09:55.657015  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:55.657364  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:56.157167  521964 type.go:168] "Request Body" body=""
	I1201 21:09:56.157246  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:56.157570  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:56.157631  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same "Request Body" / GET https://192.168.49.2:8441/api/v1/nodes/functional-198694 / "Response" status="" cycle repeated roughly every 500ms from 21:09:56 through 21:10:29, every attempt failing; node_ready.go:55 "will retry" warnings with the error `Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused` were logged at 21:09:58, 21:10:00, 21:10:03, 21:10:05, 21:10:07, 21:10:10, 21:10:12, 21:10:14, 21:10:16, 21:10:19, 21:10:21, 21:10:23, 21:10:25, and 21:10:28 ...]
	I1201 21:10:30.157860  521964 type.go:168] "Request Body" body=""
	I1201 21:10:30.157951  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:30.158382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:30.158453  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:30.656922  521964 type.go:168] "Request Body" body=""
	I1201 21:10:30.656991  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:30.657266  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:31.156994  521964 type.go:168] "Request Body" body=""
	I1201 21:10:31.157077  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:31.157409  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:31.657398  521964 type.go:168] "Request Body" body=""
	I1201 21:10:31.657481  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:31.657807  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:32.157533  521964 type.go:168] "Request Body" body=""
	I1201 21:10:32.157604  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:32.157880  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:32.657746  521964 type.go:168] "Request Body" body=""
	I1201 21:10:32.657828  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:32.658176  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:32.658229  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:33.156931  521964 type.go:168] "Request Body" body=""
	I1201 21:10:33.157018  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:33.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:33.657643  521964 type.go:168] "Request Body" body=""
	I1201 21:10:33.657710  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:33.658006  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:34.157807  521964 type.go:168] "Request Body" body=""
	I1201 21:10:34.157894  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:34.158278  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:34.656986  521964 type.go:168] "Request Body" body=""
	I1201 21:10:34.657059  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:34.657396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:35.157082  521964 type.go:168] "Request Body" body=""
	I1201 21:10:35.157199  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:35.157466  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:35.157521  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:35.656976  521964 type.go:168] "Request Body" body=""
	I1201 21:10:35.657056  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:35.657353  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:36.157368  521964 type.go:168] "Request Body" body=""
	I1201 21:10:36.157452  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:36.157808  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:36.657277  521964 type.go:168] "Request Body" body=""
	I1201 21:10:36.657352  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:36.657623  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:37.156972  521964 type.go:168] "Request Body" body=""
	I1201 21:10:37.157053  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:37.157410  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:37.656998  521964 type.go:168] "Request Body" body=""
	I1201 21:10:37.657079  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:37.657415  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:37.657471  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:38.156915  521964 type.go:168] "Request Body" body=""
	I1201 21:10:38.156982  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:38.157242  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:38.656962  521964 type.go:168] "Request Body" body=""
	I1201 21:10:38.657036  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:38.657361  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:39.156965  521964 type.go:168] "Request Body" body=""
	I1201 21:10:39.157041  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:39.157378  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:39.657653  521964 type.go:168] "Request Body" body=""
	I1201 21:10:39.657723  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:39.657992  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:39.658033  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:40.157791  521964 type.go:168] "Request Body" body=""
	I1201 21:10:40.157881  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:40.158267  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:40.656973  521964 type.go:168] "Request Body" body=""
	I1201 21:10:40.657052  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:40.657374  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:41.157040  521964 type.go:168] "Request Body" body=""
	I1201 21:10:41.157114  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:41.157371  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:41.657289  521964 type.go:168] "Request Body" body=""
	I1201 21:10:41.657371  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:41.657729  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:42.157592  521964 type.go:168] "Request Body" body=""
	I1201 21:10:42.157681  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:42.158115  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:42.158193  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:42.657466  521964 type.go:168] "Request Body" body=""
	I1201 21:10:42.657542  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:42.657815  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:43.157576  521964 type.go:168] "Request Body" body=""
	I1201 21:10:43.157658  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:43.158000  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:43.657674  521964 type.go:168] "Request Body" body=""
	I1201 21:10:43.657745  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:43.658086  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:44.157304  521964 type.go:168] "Request Body" body=""
	I1201 21:10:44.157391  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:44.157723  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:44.657534  521964 type.go:168] "Request Body" body=""
	I1201 21:10:44.657625  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:44.657958  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:44.658013  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:45.157841  521964 type.go:168] "Request Body" body=""
	I1201 21:10:45.157928  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:45.158336  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:45.657663  521964 type.go:168] "Request Body" body=""
	I1201 21:10:45.657751  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:45.658031  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:46.157548  521964 type.go:168] "Request Body" body=""
	I1201 21:10:46.157629  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:46.157950  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:46.657877  521964 type.go:168] "Request Body" body=""
	I1201 21:10:46.657952  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:46.658291  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:46.658347  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:47.156857  521964 type.go:168] "Request Body" body=""
	I1201 21:10:47.156933  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:47.157198  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:47.656938  521964 type.go:168] "Request Body" body=""
	I1201 21:10:47.657018  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:47.657397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:48.157015  521964 type.go:168] "Request Body" body=""
	I1201 21:10:48.157088  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:48.157423  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:48.657541  521964 type.go:168] "Request Body" body=""
	I1201 21:10:48.657618  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:48.657936  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:49.157607  521964 type.go:168] "Request Body" body=""
	I1201 21:10:49.157694  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:49.158025  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:49.158076  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:49.657812  521964 type.go:168] "Request Body" body=""
	I1201 21:10:49.657885  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:49.658194  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:50.157521  521964 type.go:168] "Request Body" body=""
	I1201 21:10:50.157593  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:50.157864  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:50.657707  521964 type.go:168] "Request Body" body=""
	I1201 21:10:50.657786  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:50.658124  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:51.157805  521964 type.go:168] "Request Body" body=""
	I1201 21:10:51.157886  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:51.158224  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:51.158279  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:51.657127  521964 type.go:168] "Request Body" body=""
	I1201 21:10:51.657207  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:51.657471  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:52.156922  521964 type.go:168] "Request Body" body=""
	I1201 21:10:52.157004  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:52.157305  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:52.656968  521964 type.go:168] "Request Body" body=""
	I1201 21:10:52.657044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:52.657379  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:53.156947  521964 type.go:168] "Request Body" body=""
	I1201 21:10:53.157022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:53.157288  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:53.656961  521964 type.go:168] "Request Body" body=""
	I1201 21:10:53.657037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:53.657360  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:53.657416  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:54.157114  521964 type.go:168] "Request Body" body=""
	I1201 21:10:54.157189  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:54.157509  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:54.656919  521964 type.go:168] "Request Body" body=""
	I1201 21:10:54.656990  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:54.657260  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:55.157007  521964 type.go:168] "Request Body" body=""
	I1201 21:10:55.157093  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:55.157520  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:55.657242  521964 type.go:168] "Request Body" body=""
	I1201 21:10:55.657323  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:55.657660  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:55.657717  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:56.157590  521964 type.go:168] "Request Body" body=""
	I1201 21:10:56.157668  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:56.157942  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:56.657918  521964 type.go:168] "Request Body" body=""
	I1201 21:10:56.657994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:56.658356  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:57.156965  521964 type.go:168] "Request Body" body=""
	I1201 21:10:57.157050  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:57.157377  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:57.657638  521964 type.go:168] "Request Body" body=""
	I1201 21:10:57.657712  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:57.657982  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:57.658023  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:58.157749  521964 type.go:168] "Request Body" body=""
	I1201 21:10:58.157829  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:58.158147  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:58.656879  521964 type.go:168] "Request Body" body=""
	I1201 21:10:58.656954  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:58.657292  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:59.156909  521964 type.go:168] "Request Body" body=""
	I1201 21:10:59.156979  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:59.157246  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:59.656979  521964 type.go:168] "Request Body" body=""
	I1201 21:10:59.657066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:59.657429  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:00.157201  521964 type.go:168] "Request Body" body=""
	I1201 21:11:00.157287  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:00.157616  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:00.157684  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:00.657844  521964 type.go:168] "Request Body" body=""
	I1201 21:11:00.657912  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:00.658231  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:01.156967  521964 type.go:168] "Request Body" body=""
	I1201 21:11:01.157048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:01.157426  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:01.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:11:01.657102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:01.657407  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:02.156872  521964 type.go:168] "Request Body" body=""
	I1201 21:11:02.156950  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:02.157232  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:02.656970  521964 type.go:168] "Request Body" body=""
	I1201 21:11:02.657044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:02.657347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:02.657392  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:03.157114  521964 type.go:168] "Request Body" body=""
	I1201 21:11:03.157193  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:03.157508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:03.656873  521964 type.go:168] "Request Body" body=""
	I1201 21:11:03.656949  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:03.657257  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:04.156981  521964 type.go:168] "Request Body" body=""
	I1201 21:11:04.157079  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:04.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:04.657086  521964 type.go:168] "Request Body" body=""
	I1201 21:11:04.657170  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:04.657515  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:04.657568  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:05.157777  521964 type.go:168] "Request Body" body=""
	I1201 21:11:05.157855  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:05.158116  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:05.657893  521964 type.go:168] "Request Body" body=""
	I1201 21:11:05.657976  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:05.658256  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:06.157228  521964 type.go:168] "Request Body" body=""
	I1201 21:11:06.157325  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:06.157672  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:06.657576  521964 type.go:168] "Request Body" body=""
	I1201 21:11:06.657644  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:06.657918  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:06.657957  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:07.157699  521964 type.go:168] "Request Body" body=""
	I1201 21:11:07.157770  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:07.158064  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:07.657781  521964 type.go:168] "Request Body" body=""
	I1201 21:11:07.657859  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:07.658224  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:08.157367  521964 type.go:168] "Request Body" body=""
	I1201 21:11:08.157437  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:08.157715  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:08.657499  521964 type.go:168] "Request Body" body=""
	I1201 21:11:08.657592  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:08.657968  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:08.658028  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:09.157829  521964 type.go:168] "Request Body" body=""
	I1201 21:11:09.157911  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:09.158288  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:09.656917  521964 type.go:168] "Request Body" body=""
	I1201 21:11:09.656990  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:09.657288  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:10.156991  521964 type.go:168] "Request Body" body=""
	I1201 21:11:10.157073  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:10.157446  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:10.657170  521964 type.go:168] "Request Body" body=""
	I1201 21:11:10.657248  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:10.657599  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:11.156833  521964 type.go:168] "Request Body" body=""
	I1201 21:11:11.156912  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:11.157200  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:11.157249  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:11.656972  521964 type.go:168] "Request Body" body=""
	I1201 21:11:11.657054  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:11.657556  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:12.157243  521964 type.go:168] "Request Body" body=""
	I1201 21:11:12.157318  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:12.157669  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:12.657823  521964 type.go:168] "Request Body" body=""
	I1201 21:11:12.657911  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:12.658208  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:13.156933  521964 type.go:168] "Request Body" body=""
	I1201 21:11:13.157010  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:13.157369  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:13.157434  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:13.657105  521964 type.go:168] "Request Body" body=""
	I1201 21:11:13.657190  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:13.657535  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:14.157809  521964 type.go:168] "Request Body" body=""
	I1201 21:11:14.157875  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:14.158149  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:14.657913  521964 type.go:168] "Request Body" body=""
	I1201 21:11:14.658000  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:14.658340  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:15.156989  521964 type.go:168] "Request Body" body=""
	I1201 21:11:15.157075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:15.157421  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:15.157479  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:15.656928  521964 type.go:168] "Request Body" body=""
	I1201 21:11:15.657004  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:15.657310  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:16.157234  521964 type.go:168] "Request Body" body=""
	I1201 21:11:16.157328  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:16.157693  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:16.657344  521964 type.go:168] "Request Body" body=""
	I1201 21:11:16.657439  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:16.657980  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:17.157136  521964 type.go:168] "Request Body" body=""
	I1201 21:11:17.157223  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:17.157592  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:17.157646  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:17.657532  521964 type.go:168] "Request Body" body=""
	I1201 21:11:17.657620  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:17.657985  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:18.157793  521964 type.go:168] "Request Body" body=""
	I1201 21:11:18.157869  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:18.158224  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:18.657332  521964 type.go:168] "Request Body" body=""
	I1201 21:11:18.657414  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:18.657739  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:19.157633  521964 type.go:168] "Request Body" body=""
	I1201 21:11:19.157712  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:19.158075  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:19.158138  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:19.656860  521964 type.go:168] "Request Body" body=""
	I1201 21:11:19.656944  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:19.657367  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:20.157129  521964 type.go:168] "Request Body" body=""
	I1201 21:11:20.157251  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:20.157538  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:20.656988  521964 type.go:168] "Request Body" body=""
	I1201 21:11:20.657069  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:20.657403  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:21.157154  521964 type.go:168] "Request Body" body=""
	I1201 21:11:21.157249  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:21.157653  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:21.657489  521964 type.go:168] "Request Body" body=""
	I1201 21:11:21.657579  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:21.657887  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:21.657951  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:22.157730  521964 type.go:168] "Request Body" body=""
	I1201 21:11:22.157807  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:22.158188  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:22.656943  521964 type.go:168] "Request Body" body=""
	I1201 21:11:22.657022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:22.657362  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:23.157066  521964 type.go:168] "Request Body" body=""
	I1201 21:11:23.157143  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:23.157413  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:23.656990  521964 type.go:168] "Request Body" body=""
	I1201 21:11:23.657067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:23.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:24.157147  521964 type.go:168] "Request Body" body=""
	I1201 21:11:24.157227  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:24.157551  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:24.157604  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:24.657818  521964 type.go:168] "Request Body" body=""
	I1201 21:11:24.657890  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:24.658165  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:25.156979  521964 type.go:168] "Request Body" body=""
	I1201 21:11:25.157066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:25.157466  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:25.657192  521964 type.go:168] "Request Body" body=""
	I1201 21:11:25.657269  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:25.657598  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:26.157266  521964 type.go:168] "Request Body" body=""
	I1201 21:11:26.157339  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:26.157618  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:26.157661  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:26.657561  521964 type.go:168] "Request Body" body=""
	I1201 21:11:26.657639  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:26.658002  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:27.157818  521964 type.go:168] "Request Body" body=""
	I1201 21:11:27.157901  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:27.158277  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:27.657008  521964 type.go:168] "Request Body" body=""
	I1201 21:11:27.657074  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:27.657338  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:28.157024  521964 type.go:168] "Request Body" body=""
	I1201 21:11:28.157108  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:28.157462  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:28.657032  521964 type.go:168] "Request Body" body=""
	I1201 21:11:28.657112  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:28.657442  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:28.657505  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:29.157808  521964 type.go:168] "Request Body" body=""
	I1201 21:11:29.157877  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:29.158164  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:29.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:11:29.656994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:29.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:30.157157  521964 type.go:168] "Request Body" body=""
	I1201 21:11:30.157249  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:30.157650  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:30.657372  521964 type.go:168] "Request Body" body=""
	I1201 21:11:30.657451  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:30.657748  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:30.657794  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:31.157596  521964 type.go:168] "Request Body" body=""
	I1201 21:11:31.157692  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:31.158099  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:31.657089  521964 type.go:168] "Request Body" body=""
	I1201 21:11:31.657174  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:31.657530  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:32.156920  521964 type.go:168] "Request Body" body=""
	I1201 21:11:32.157002  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:32.157283  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:32.656965  521964 type.go:168] "Request Body" body=""
	I1201 21:11:32.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:32.657400  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:33.157120  521964 type.go:168] "Request Body" body=""
	I1201 21:11:33.157204  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:33.157580  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:33.157650  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:33.656925  521964 type.go:168] "Request Body" body=""
	I1201 21:11:33.657005  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:33.657282  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:34.156999  521964 type.go:168] "Request Body" body=""
	I1201 21:11:34.157085  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:34.157508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:34.657236  521964 type.go:168] "Request Body" body=""
	I1201 21:11:34.657329  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:34.657650  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:35.156913  521964 type.go:168] "Request Body" body=""
	I1201 21:11:35.156987  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:35.157331  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:35.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:11:35.657055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:35.657385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:35.657436  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:36.157278  521964 type.go:168] "Request Body" body=""
	I1201 21:11:36.157365  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:36.157713  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:36.657755  521964 type.go:168] "Request Body" body=""
	I1201 21:11:36.657874  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:36.658213  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:37.156873  521964 type.go:168] "Request Body" body=""
	I1201 21:11:37.156946  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:37.157318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:37.656921  521964 type.go:168] "Request Body" body=""
	I1201 21:11:37.656998  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:37.657365  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:38.157094  521964 type.go:168] "Request Body" body=""
	I1201 21:11:38.157172  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:38.157449  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:38.157537  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:38.656976  521964 type.go:168] "Request Body" body=""
	I1201 21:11:38.657054  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:38.657414  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:39.157117  521964 type.go:168] "Request Body" body=""
	I1201 21:11:39.157193  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:39.157513  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:39.656888  521964 type.go:168] "Request Body" body=""
	I1201 21:11:39.656994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:39.657266  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:40.156958  521964 type.go:168] "Request Body" body=""
	I1201 21:11:40.157031  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:40.157358  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:40.657069  521964 type.go:168] "Request Body" body=""
	I1201 21:11:40.657148  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:40.657480  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:40.657538  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:41.156915  521964 type.go:168] "Request Body" body=""
	I1201 21:11:41.156983  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:41.157301  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:41.657216  521964 type.go:168] "Request Body" body=""
	I1201 21:11:41.657295  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:41.657644  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:42.157003  521964 type.go:168] "Request Body" body=""
	I1201 21:11:42.157088  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:42.157475  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:42.657872  521964 type.go:168] "Request Body" body=""
	I1201 21:11:42.657940  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:42.658284  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:42.658338  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:43.156961  521964 type.go:168] "Request Body" body=""
	I1201 21:11:43.157034  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:43.157374  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:43.657103  521964 type.go:168] "Request Body" body=""
	I1201 21:11:43.657182  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:43.657524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:44.156866  521964 type.go:168] "Request Body" body=""
	I1201 21:11:44.156937  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:44.157219  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:44.656982  521964 type.go:168] "Request Body" body=""
	I1201 21:11:44.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:44.657376  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:45.157037  521964 type.go:168] "Request Body" body=""
	I1201 21:11:45.157121  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:45.157482  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:45.157545  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:45.657188  521964 type.go:168] "Request Body" body=""
	I1201 21:11:45.657259  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:45.657524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:46.157054  521964 type.go:168] "Request Body" body=""
	I1201 21:11:46.157131  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:46.157511  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:46.656990  521964 type.go:168] "Request Body" body=""
	I1201 21:11:46.657078  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:46.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:47.157109  521964 type.go:168] "Request Body" body=""
	I1201 21:11:47.157180  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:47.157448  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:47.656964  521964 type.go:168] "Request Body" body=""
	I1201 21:11:47.657093  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:47.657408  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:47.657462  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:48.156988  521964 type.go:168] "Request Body" body=""
	I1201 21:11:48.157067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:48.157406  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:48.657126  521964 type.go:168] "Request Body" body=""
	I1201 21:11:48.657197  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:48.657487  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:49.156981  521964 type.go:168] "Request Body" body=""
	I1201 21:11:49.157066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:49.157397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:49.656960  521964 type.go:168] "Request Body" body=""
	I1201 21:11:49.657037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:49.657346  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:50.156927  521964 type.go:168] "Request Body" body=""
	I1201 21:11:50.157010  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:50.157276  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:50.157327  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:50.657022  521964 type.go:168] "Request Body" body=""
	I1201 21:11:50.657102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:50.657478  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:51.156975  521964 type.go:168] "Request Body" body=""
	I1201 21:11:51.157058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:51.157409  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:51.656919  521964 type.go:168] "Request Body" body=""
	I1201 21:11:51.656998  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:51.657362  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:52.156975  521964 type.go:168] "Request Body" body=""
	I1201 21:11:52.157051  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:52.157406  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:52.157465  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:52.657158  521964 type.go:168] "Request Body" body=""
	I1201 21:11:52.657238  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:52.657574  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:53.156907  521964 type.go:168] "Request Body" body=""
	I1201 21:11:53.156984  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:53.157259  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:53.656971  521964 type.go:168] "Request Body" body=""
	I1201 21:11:53.657042  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:53.657409  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:54.156987  521964 type.go:168] "Request Body" body=""
	I1201 21:11:54.157066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:54.157400  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:54.656919  521964 type.go:168] "Request Body" body=""
	I1201 21:11:54.656994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:54.657292  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:54.657346  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:55.156967  521964 type.go:168] "Request Body" body=""
	I1201 21:11:55.157050  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:55.157385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:55.656978  521964 type.go:168] "Request Body" body=""
	I1201 21:11:55.657053  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:55.657357  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:56.157331  521964 type.go:168] "Request Body" body=""
	I1201 21:11:56.157412  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:56.157693  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:56.657721  521964 type.go:168] "Request Body" body=""
	I1201 21:11:56.657797  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:56.658158  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:56.658204  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:57.156915  521964 type.go:168] "Request Body" body=""
	I1201 21:11:57.157002  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:57.157388  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:57.657664  521964 type.go:168] "Request Body" body=""
	I1201 21:11:57.657735  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:57.658000  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:58.157786  521964 type.go:168] "Request Body" body=""
	I1201 21:11:58.157861  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:58.158295  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:58.657007  521964 type.go:168] "Request Body" body=""
	I1201 21:11:58.657100  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:58.657480  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:59.157749  521964 type.go:168] "Request Body" body=""
	I1201 21:11:59.157823  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:59.158141  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:59.158186  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:59.656847  521964 type.go:168] "Request Body" body=""
	I1201 21:11:59.656927  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:59.657290  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:00.156952  521964 type.go:168] "Request Body" body=""
	I1201 21:12:00.157062  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:00.157388  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:00.657065  521964 type.go:168] "Request Body" body=""
	I1201 21:12:00.657141  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:00.657419  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:01.156993  521964 type.go:168] "Request Body" body=""
	I1201 21:12:01.157080  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:01.157418  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:01.657372  521964 type.go:168] "Request Body" body=""
	I1201 21:12:01.657452  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:01.657807  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:01.657861  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:02.156990  521964 type.go:168] "Request Body" body=""
	I1201 21:12:02.157067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:02.157446  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:02.656975  521964 type.go:168] "Request Body" body=""
	I1201 21:12:02.657050  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:02.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:03.157097  521964 type.go:168] "Request Body" body=""
	I1201 21:12:03.157177  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:03.157545  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:03.657864  521964 type.go:168] "Request Body" body=""
	I1201 21:12:03.657940  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:03.658290  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:03.658354  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:04.157043  521964 type.go:168] "Request Body" body=""
	I1201 21:12:04.157122  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:04.157481  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:04.657071  521964 type.go:168] "Request Body" body=""
	I1201 21:12:04.657150  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:04.657508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:05.157762  521964 type.go:168] "Request Body" body=""
	I1201 21:12:05.157829  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:05.158111  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:05.657870  521964 type.go:168] "Request Body" body=""
	I1201 21:12:05.658003  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:05.658357  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:05.658411  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:06.157176  521964 type.go:168] "Request Body" body=""
	I1201 21:12:06.157261  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:06.157642  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:06.657501  521964 type.go:168] "Request Body" body=""
	I1201 21:12:06.657577  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:06.657845  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:07.157682  521964 type.go:168] "Request Body" body=""
	I1201 21:12:07.157766  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:07.158185  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:07.656894  521964 type.go:168] "Request Body" body=""
	I1201 21:12:07.656972  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:07.657318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:08.157028  521964 type.go:168] "Request Body" body=""
	I1201 21:12:08.157109  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:08.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:08.157437  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:08.656996  521964 type.go:168] "Request Body" body=""
	I1201 21:12:08.657072  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:08.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:09.157160  521964 type.go:168] "Request Body" body=""
	I1201 21:12:09.157245  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:09.157627  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:09.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:12:09.656966  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:09.657243  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:10.156932  521964 type.go:168] "Request Body" body=""
	I1201 21:12:10.157016  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:10.157347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:10.657029  521964 type.go:168] "Request Body" body=""
	I1201 21:12:10.657110  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:10.657478  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:10.657537  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:11.157239  521964 type.go:168] "Request Body" body=""
	I1201 21:12:11.157313  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:11.157609  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:11.657334  521964 type.go:168] "Request Body" body=""
	I1201 21:12:11.657410  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:11.657733  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:12.157529  521964 type.go:168] "Request Body" body=""
	I1201 21:12:12.157603  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:12.157977  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:12.657303  521964 type.go:168] "Request Body" body=""
	I1201 21:12:12.657379  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:12.657647  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:12.657692  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:13.156979  521964 type.go:168] "Request Body" body=""
	I1201 21:12:13.157059  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:13.157445  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:13.657161  521964 type.go:168] "Request Body" body=""
	I1201 21:12:13.657236  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:13.657560  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:14.157233  521964 type.go:168] "Request Body" body=""
	I1201 21:12:14.157309  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:14.157583  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:14.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:12:14.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:14.657408  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:15.157135  521964 type.go:168] "Request Body" body=""
	I1201 21:12:15.157216  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:15.157563  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:15.157629  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:15.657856  521964 type.go:168] "Request Body" body=""
	I1201 21:12:15.657928  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:15.658198  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:16.157210  521964 type.go:168] "Request Body" body=""
	I1201 21:12:16.157294  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:16.157627  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:16.657499  521964 type.go:168] "Request Body" body=""
	I1201 21:12:16.657580  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:16.657918  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:17.157664  521964 type.go:168] "Request Body" body=""
	I1201 21:12:17.157737  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:17.158007  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:17.158051  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:17.657817  521964 type.go:168] "Request Body" body=""
	I1201 21:12:17.657893  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:17.658321  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:18.157126  521964 type.go:168] "Request Body" body=""
	I1201 21:12:18.157218  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:18.157616  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:18.657309  521964 type.go:168] "Request Body" body=""
	I1201 21:12:18.657377  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:18.657641  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:19.157459  521964 type.go:168] "Request Body" body=""
	I1201 21:12:19.157533  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:19.157874  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:19.657700  521964 type.go:168] "Request Body" body=""
	I1201 21:12:19.657774  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:19.658113  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:19.658170  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:20.157420  521964 type.go:168] "Request Body" body=""
	I1201 21:12:20.157499  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:20.157831  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:20.657717  521964 type.go:168] "Request Body" body=""
	I1201 21:12:20.657790  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:20.658137  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:21.156870  521964 type.go:168] "Request Body" body=""
	I1201 21:12:21.156955  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:21.157335  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:21.656896  521964 type.go:168] "Request Body" body=""
	I1201 21:12:21.656973  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:21.657240  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:22.156959  521964 type.go:168] "Request Body" body=""
	I1201 21:12:22.157032  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:22.157337  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:22.157382  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:22.656961  521964 type.go:168] "Request Body" body=""
	I1201 21:12:22.657035  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:22.657334  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:23.156895  521964 type.go:168] "Request Body" body=""
	I1201 21:12:23.156974  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:23.157240  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:23.656942  521964 type.go:168] "Request Body" body=""
	I1201 21:12:23.657018  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:23.657321  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:24.156930  521964 type.go:168] "Request Body" body=""
	I1201 21:12:24.157030  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:24.157353  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:24.157404  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:24.657661  521964 type.go:168] "Request Body" body=""
	I1201 21:12:24.657744  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:24.658139  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:25.156898  521964 type.go:168] "Request Body" body=""
	I1201 21:12:25.157058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:25.157380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:25.657004  521964 type.go:168] "Request Body" body=""
	I1201 21:12:25.657102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:25.657473  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:26.157364  521964 type.go:168] "Request Body" body=""
	I1201 21:12:26.157445  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:26.157715  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:26.157767  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:26.657747  521964 type.go:168] "Request Body" body=""
	I1201 21:12:26.657820  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:26.658119  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:27.157901  521964 type.go:168] "Request Body" body=""
	I1201 21:12:27.157983  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:27.158328  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:27.656880  521964 type.go:168] "Request Body" body=""
	I1201 21:12:27.656966  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:27.657232  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:28.156968  521964 type.go:168] "Request Body" body=""
	I1201 21:12:28.157046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:28.157396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:28.657122  521964 type.go:168] "Request Body" body=""
	I1201 21:12:28.657193  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:28.657567  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:28.657618  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:29.157156  521964 type.go:168] "Request Body" body=""
	I1201 21:12:29.157234  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:29.157509  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:29.656952  521964 type.go:168] "Request Body" body=""
	I1201 21:12:29.657026  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:29.657361  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:30.156982  521964 type.go:168] "Request Body" body=""
	I1201 21:12:30.157060  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:30.157416  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:30.657692  521964 type.go:168] "Request Body" body=""
	I1201 21:12:30.657762  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:30.658041  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:30.658082  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:31.157866  521964 type.go:168] "Request Body" body=""
	I1201 21:12:31.157947  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:31.158324  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:31.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:12:31.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:31.657381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:32.157066  521964 type.go:168] "Request Body" body=""
	I1201 21:12:32.157144  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:32.157399  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:32.656944  521964 type.go:168] "Request Body" body=""
	I1201 21:12:32.657015  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:32.657365  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:33.156964  521964 type.go:168] "Request Body" body=""
	I1201 21:12:33.157045  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:33.157424  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:33.157484  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:33.657133  521964 type.go:168] "Request Body" body=""
	I1201 21:12:33.657209  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:33.657460  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:34.156966  521964 type.go:168] "Request Body" body=""
	I1201 21:12:34.157049  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:34.157398  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:34.657117  521964 type.go:168] "Request Body" body=""
	I1201 21:12:34.657200  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:34.657538  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:35.157873  521964 type.go:168] "Request Body" body=""
	I1201 21:12:35.157950  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:35.158226  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:35.158268  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:35.656942  521964 type.go:168] "Request Body" body=""
	I1201 21:12:35.657022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:35.657366  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:36.157253  521964 type.go:168] "Request Body" body=""
	I1201 21:12:36.157329  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:36.157665  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:36.657154  521964 type.go:168] "Request Body" body=""
	I1201 21:12:36.657221  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:36.657490  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:37.157161  521964 type.go:168] "Request Body" body=""
	I1201 21:12:37.157235  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:37.157578  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:37.657162  521964 type.go:168] "Request Body" body=""
	I1201 21:12:37.657242  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:37.657583  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:37.657637  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:38.156913  521964 type.go:168] "Request Body" body=""
	I1201 21:12:38.156993  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:38.157311  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:38.656975  521964 type.go:168] "Request Body" body=""
	I1201 21:12:38.657056  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:38.657412  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:39.157107  521964 type.go:168] "Request Body" body=""
	I1201 21:12:39.157181  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:39.157541  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:39.657246  521964 type.go:168] "Request Body" body=""
	I1201 21:12:39.657329  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:39.657614  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:40.157008  521964 type.go:168] "Request Body" body=""
	I1201 21:12:40.157081  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:40.157402  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:40.157459  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:40.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:12:40.657058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:40.657389  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:41.156917  521964 type.go:168] "Request Body" body=""
	I1201 21:12:41.157011  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:41.157297  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:41.656997  521964 type.go:168] "Request Body" body=""
	I1201 21:12:41.657083  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:41.657499  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:42.157169  521964 type.go:168] "Request Body" body=""
	I1201 21:12:42.157262  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:42.157666  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:42.157723  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:42.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:12:42.656961  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:42.657222  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:43.156956  521964 type.go:168] "Request Body" body=""
	I1201 21:12:43.157047  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:43.157347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:43.657015  521964 type.go:168] "Request Body" body=""
	I1201 21:12:43.657087  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:43.657366  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:44.156909  521964 type.go:168] "Request Body" body=""
	I1201 21:12:44.156982  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:44.157261  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:44.656982  521964 type.go:168] "Request Body" body=""
	I1201 21:12:44.657068  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:44.657431  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:44.657488  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:45.157013  521964 type.go:168] "Request Body" body=""
	I1201 21:12:45.157096  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:45.157431  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:45.657107  521964 type.go:168] "Request Body" body=""
	I1201 21:12:45.657195  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:45.657476  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:46.157495  521964 type.go:168] "Request Body" body=""
	I1201 21:12:46.157580  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:46.157930  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:46.656884  521964 type.go:168] "Request Body" body=""
	I1201 21:12:46.656964  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:46.657318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:47.157023  521964 type.go:168] "Request Body" body=""
	I1201 21:12:47.157100  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:47.157421  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:47.157476  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:47.656956  521964 type.go:168] "Request Body" body=""
	I1201 21:12:47.657031  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:47.657374  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:48.156953  521964 type.go:168] "Request Body" body=""
	I1201 21:12:48.157032  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:48.157373  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:48.656942  521964 type.go:168] "Request Body" body=""
	I1201 21:12:48.657023  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:48.657325  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:49.157039  521964 type.go:168] "Request Body" body=""
	I1201 21:12:49.157121  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:49.157480  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:49.157538  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:49.656960  521964 type.go:168] "Request Body" body=""
	I1201 21:12:49.657039  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:49.657352  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:50.156889  521964 type.go:168] "Request Body" body=""
	I1201 21:12:50.156960  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:50.157229  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:50.656950  521964 type.go:168] "Request Body" body=""
	I1201 21:12:50.657037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:50.657397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:51.157121  521964 type.go:168] "Request Body" body=""
	I1201 21:12:51.157204  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:51.157551  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:51.157618  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:51.657566  521964 type.go:168] "Request Body" body=""
	I1201 21:12:51.657641  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:51.657931  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:52.157799  521964 type.go:168] "Request Body" body=""
	I1201 21:12:52.157888  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:52.158264  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:52.656986  521964 type.go:168] "Request Body" body=""
	I1201 21:12:52.657083  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:52.657426  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:53.157683  521964 type.go:168] "Request Body" body=""
	I1201 21:12:53.157769  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:53.158044  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:53.158097  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:53.657845  521964 type.go:168] "Request Body" body=""
	I1201 21:12:53.657932  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:53.658305  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:54.156954  521964 type.go:168] "Request Body" body=""
	I1201 21:12:54.157044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:54.157395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:54.656951  521964 type.go:168] "Request Body" body=""
	I1201 21:12:54.657024  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:54.657370  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:55.157133  521964 type.go:168] "Request Body" body=""
	I1201 21:12:55.157212  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:55.157580  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:55.657319  521964 type.go:168] "Request Body" body=""
	I1201 21:12:55.657404  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:55.657768  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:55.657823  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:56.157456  521964 type.go:168] "Request Body" body=""
	I1201 21:12:56.157537  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:56.157827  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:56.657750  521964 type.go:168] "Request Body" body=""
	I1201 21:12:56.657836  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:56.658210  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:57.156961  521964 type.go:168] "Request Body" body=""
	I1201 21:12:57.157036  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:57.157395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:57.657097  521964 type.go:168] "Request Body" body=""
	I1201 21:12:57.657174  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:57.657457  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:58.156992  521964 type.go:168] "Request Body" body=""
	I1201 21:12:58.157072  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:58.157466  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:58.157532  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:58.657043  521964 type.go:168] "Request Body" body=""
	I1201 21:12:58.657124  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:58.657483  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:59.156864  521964 type.go:168] "Request Body" body=""
	I1201 21:12:59.156938  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:59.157199  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:59.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:12:59.656974  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:59.657286  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:00.157057  521964 type.go:168] "Request Body" body=""
	I1201 21:13:00.157147  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:00.157511  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:00.157569  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:00.657428  521964 type.go:168] "Request Body" body=""
	I1201 21:13:00.657504  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:00.657796  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:01.157663  521964 type.go:168] "Request Body" body=""
	I1201 21:13:01.157764  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:01.158124  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:01.656992  521964 type.go:168] "Request Body" body=""
	I1201 21:13:01.657066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:01.657380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:02.157714  521964 type.go:168] "Request Body" body=""
	I1201 21:13:02.157793  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:02.158080  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:02.158125  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:02.657871  521964 type.go:168] "Request Body" body=""
	I1201 21:13:02.657947  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:02.658316  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:03.156973  521964 type.go:168] "Request Body" body=""
	I1201 21:13:03.157059  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:03.157502  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:03.657012  521964 type.go:168] "Request Body" body=""
	I1201 21:13:03.657090  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:03.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:04.157107  521964 type.go:168] "Request Body" body=""
	I1201 21:13:04.157183  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:04.157524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:04.657241  521964 type.go:168] "Request Body" body=""
	I1201 21:13:04.657321  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:04.657639  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:04.657698  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:05.156921  521964 type.go:168] "Request Body" body=""
	I1201 21:13:05.157001  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:05.157325  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:05.656994  521964 type.go:168] "Request Body" body=""
	I1201 21:13:05.657078  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:05.657437  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:06.157391  521964 type.go:168] "Request Body" body=""
	I1201 21:13:06.157477  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:06.157856  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:06.657298  521964 type.go:168] "Request Body" body=""
	I1201 21:13:06.657378  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:06.657684  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:06.657732  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:07.157498  521964 type.go:168] "Request Body" body=""
	I1201 21:13:07.157579  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:07.157929  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:07.657804  521964 type.go:168] "Request Body" body=""
	I1201 21:13:07.657885  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:07.658219  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:08.157597  521964 type.go:168] "Request Body" body=""
	I1201 21:13:08.157669  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:08.157933  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:08.657711  521964 type.go:168] "Request Body" body=""
	I1201 21:13:08.657785  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:08.658162  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:08.658217  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:09.156936  521964 type.go:168] "Request Body" body=""
	I1201 21:13:09.157015  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:09.157375  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:09.657620  521964 type.go:168] "Request Body" body=""
	I1201 21:13:09.657765  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:09.658040  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:10.157874  521964 type.go:168] "Request Body" body=""
	I1201 21:13:10.157960  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:10.158354  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:10.656946  521964 type.go:168] "Request Body" body=""
	I1201 21:13:10.657024  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:10.657358  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:11.157610  521964 type.go:168] "Request Body" body=""
	I1201 21:13:11.157697  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:11.157986  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:11.158031  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:11.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:13:11.656964  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:11.657296  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:12.157004  521964 type.go:168] "Request Body" body=""
	I1201 21:13:12.157081  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:12.157397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:12.657679  521964 type.go:168] "Request Body" body=""
	I1201 21:13:12.657749  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:12.658023  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:13.157872  521964 type.go:168] "Request Body" body=""
	I1201 21:13:13.157950  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:13.158289  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:13.158341  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:13.656969  521964 type.go:168] "Request Body" body=""
	I1201 21:13:13.657052  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:13.657394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:14.156916  521964 type.go:168] "Request Body" body=""
	I1201 21:13:14.156994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:14.157319  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:14.656957  521964 type.go:168] "Request Body" body=""
	I1201 21:13:14.657034  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:14.657371  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:15.156988  521964 type.go:168] "Request Body" body=""
	I1201 21:13:15.157084  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:15.157470  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:15.656852  521964 type.go:168] "Request Body" body=""
	I1201 21:13:15.656945  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:15.657219  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:15.657269  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:16.157274  521964 type.go:168] "Request Body" body=""
	I1201 21:13:16.157357  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:16.157728  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:16.657690  521964 type.go:168] "Request Body" body=""
	I1201 21:13:16.657781  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:16.658180  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:17.157176  521964 type.go:168] "Request Body" body=""
	I1201 21:13:17.157257  521964 node_ready.go:38] duration metric: took 6m0.000516111s for node "functional-198694" to be "Ready" ...
	I1201 21:13:17.164775  521964 out.go:203] 
	W1201 21:13:17.167674  521964 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1201 21:13:17.167697  521964 out.go:285] * 
	W1201 21:13:17.169852  521964 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 21:13:17.172668  521964 out.go:203] 
	
	
	==> CRI-O <==
	Dec 01 21:13:26 functional-198694 crio[5973]: time="2025-12-01T21:13:26.334736475Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=999f49b0-4d8d-487f-b88a-584f3d8d35c4 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:26 functional-198694 crio[5973]: time="2025-12-01T21:13:26.362325785Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=e9e64604-29b3-4230-b132-48cbd4e67a88 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:26 functional-198694 crio[5973]: time="2025-12-01T21:13:26.362484739Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=e9e64604-29b3-4230-b132-48cbd4e67a88 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:26 functional-198694 crio[5973]: time="2025-12-01T21:13:26.362535519Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=e9e64604-29b3-4230-b132-48cbd4e67a88 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:27 functional-198694 crio[5973]: time="2025-12-01T21:13:27.507694356Z" level=info msg="Checking image status: minikube-local-cache-test:functional-198694" id=30062177-52f3-4ebc-ade7-ad4587233858 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:27 functional-198694 crio[5973]: time="2025-12-01T21:13:27.532768374Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-198694" id=5ddb31bb-ac5a-458c-b65b-c53b10e34ea3 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:27 functional-198694 crio[5973]: time="2025-12-01T21:13:27.532927114Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-198694 not found" id=5ddb31bb-ac5a-458c-b65b-c53b10e34ea3 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:27 functional-198694 crio[5973]: time="2025-12-01T21:13:27.532968163Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-198694 found" id=5ddb31bb-ac5a-458c-b65b-c53b10e34ea3 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:27 functional-198694 crio[5973]: time="2025-12-01T21:13:27.558507537Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-198694" id=dd7c7415-feed-41b6-a009-1c6d4a510de4 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:27 functional-198694 crio[5973]: time="2025-12-01T21:13:27.558653281Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-198694 not found" id=dd7c7415-feed-41b6-a009-1c6d4a510de4 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:27 functional-198694 crio[5973]: time="2025-12-01T21:13:27.558695963Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-198694 found" id=dd7c7415-feed-41b6-a009-1c6d4a510de4 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:28 functional-198694 crio[5973]: time="2025-12-01T21:13:28.409502634Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=c25612e6-8ceb-43a3-888e-586b437d2001 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:28 functional-198694 crio[5973]: time="2025-12-01T21:13:28.750543623Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=ca4a6a51-43f6-42c4-8e30-4576c872fa28 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:28 functional-198694 crio[5973]: time="2025-12-01T21:13:28.750734871Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=ca4a6a51-43f6-42c4-8e30-4576c872fa28 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:28 functional-198694 crio[5973]: time="2025-12-01T21:13:28.750788252Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=ca4a6a51-43f6-42c4-8e30-4576c872fa28 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.321003044Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=5933993d-142c-4792-8c0f-832fcd395510 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.32118013Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=5933993d-142c-4792-8c0f-832fcd395510 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.321241716Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=5933993d-142c-4792-8c0f-832fcd395510 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.371480366Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f4dda806-f97e-43ff-b429-2175d62d4212 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.371644095Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=f4dda806-f97e-43ff-b429-2175d62d4212 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.371698715Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=f4dda806-f97e-43ff-b429-2175d62d4212 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.398067293Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=1f4fe85d-d48b-4888-942f-bef3d6dcc64a name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.398200262Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=1f4fe85d-d48b-4888-942f-bef3d6dcc64a name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.398236454Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=1f4fe85d-d48b-4888-942f-bef3d6dcc64a name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.950677554Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=5cc173f2-1c95-471a-b9d9-748ad92a53de name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:13:31.589100    9929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:13:31.589546    9929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:13:31.591332    9929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:13:31.592021    9929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:13:31.593532    9929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 1 19:31] hrtimer: interrupt took 3224715 ns
	[Dec 1 20:00] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:16] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 1 20:22] systemd-journald[231]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 1 20:37] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:38] overlayfs: idmapped layers are currently not supported
	[  +0.076902] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 1 20:44] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:45] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:58] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:13:31 up  2:56,  0 user,  load average: 0.52, 0.32, 0.60
	Linux functional-198694 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 01 21:13:29 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:13:29 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1151.
	Dec 01 21:13:29 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:29 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:29 functional-198694 kubelet[9809]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:29 functional-198694 kubelet[9809]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:29 functional-198694 kubelet[9809]: E1201 21:13:29.963874    9809 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:13:29 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:13:29 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:13:30 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1152.
	Dec 01 21:13:30 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:30 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:30 functional-198694 kubelet[9839]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:30 functional-198694 kubelet[9839]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:30 functional-198694 kubelet[9839]: E1201 21:13:30.716293    9839 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:13:30 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:13:30 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:13:31 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1153.
	Dec 01 21:13:31 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:31 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:31 functional-198694 kubelet[9899]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:31 functional-198694 kubelet[9899]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:31 functional-198694 kubelet[9899]: E1201 21:13:31.480633    9899 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:13:31 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:13:31 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694: exit status 2 (412.61137ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-198694" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.59s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-198694 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-198694 get pods: exit status 1 (119.183386ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-198694 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-198694
helpers_test.go:243: (dbg) docker inspect functional-198694:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	        "Created": "2025-12-01T20:58:43.365574809Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515902,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:58:43.423541772Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hostname",
	        "HostsPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hosts",
	        "LogPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8-json.log",
	        "Name": "/functional-198694",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-198694:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-198694",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	                "LowerDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26-init/diff:/var/lib/docker/overlay2/f0ba49b44048d740697b37803f992c2f7a99e21ce77995ff128ceffc01329aa1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-198694",
	                "Source": "/var/lib/docker/volumes/functional-198694/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-198694",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-198694",
	                "name.minikube.sigs.k8s.io": "functional-198694",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8cb3cb57c35171bfce361b9e0de9c9f36ef89baf5e4ad0dd73159d10f1056820",
	            "SandboxKey": "/var/run/docker/netns/8cb3cb57c351",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-198694": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:9a:72:4c:a4:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9750c903db8645b2871ee2eb6fd897b77e607b9a995005513c7bcf81da63c819",
	                    "EndpointID": "884d9ec9fdfc44c10ccd4516f4ea05a765fb3ccb2118db0e8af2392e8613c402",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-198694",
	                        "e545295bd958"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694: exit status 2 (368.063982ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-198694 logs -n 25: (1.084555485s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-074555 image ls --format short --alsologtostderr                                                                                       │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image   │ functional-074555 image ls --format yaml --alsologtostderr                                                                                        │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh     │ functional-074555 ssh pgrep buildkitd                                                                                                             │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │                     │
	│ image   │ functional-074555 image ls --format json --alsologtostderr                                                                                        │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image   │ functional-074555 image ls --format table --alsologtostderr                                                                                       │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image   │ functional-074555 image build -t localhost/my-image:functional-074555 testdata/build --alsologtostderr                                            │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image   │ functional-074555 image ls                                                                                                                        │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ delete  │ -p functional-074555                                                                                                                              │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ start   │ -p functional-198694 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │                     │
	│ start   │ -p functional-198694 --alsologtostderr -v=8                                                                                                       │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:07 UTC │                     │
	│ cache   │ functional-198694 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ functional-198694 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ functional-198694 cache add registry.k8s.io/pause:latest                                                                                          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ functional-198694 cache add minikube-local-cache-test:functional-198694                                                                           │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ functional-198694 cache delete minikube-local-cache-test:functional-198694                                                                        │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ ssh     │ functional-198694 ssh sudo crictl images                                                                                                          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ ssh     │ functional-198694 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ ssh     │ functional-198694 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │                     │
	│ cache   │ functional-198694 cache reload                                                                                                                    │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ ssh     │ functional-198694 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ kubectl │ functional-198694 kubectl -- --context functional-198694 get pods                                                                                 │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 21:07:11
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 21:07:11.242920  521964 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:07:11.243351  521964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:07:11.243387  521964 out.go:374] Setting ErrFile to fd 2...
	I1201 21:07:11.243410  521964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:07:11.243711  521964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:07:11.244177  521964 out.go:368] Setting JSON to false
	I1201 21:07:11.245066  521964 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10181,"bootTime":1764613051,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 21:07:11.245167  521964 start.go:143] virtualization:  
	I1201 21:07:11.248721  521964 out.go:179] * [functional-198694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 21:07:11.252584  521964 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 21:07:11.252676  521964 notify.go:221] Checking for updates...
	I1201 21:07:11.258436  521964 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 21:07:11.261368  521964 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:11.264327  521964 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 21:07:11.267307  521964 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 21:07:11.270189  521964 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 21:07:11.273718  521964 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:07:11.273862  521964 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 21:07:11.298213  521964 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 21:07:11.298331  521964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:07:11.359645  521964 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 21:07:11.34998497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:07:11.359790  521964 docker.go:319] overlay module found
	I1201 21:07:11.364655  521964 out.go:179] * Using the docker driver based on existing profile
	I1201 21:07:11.367463  521964 start.go:309] selected driver: docker
	I1201 21:07:11.367488  521964 start.go:927] validating driver "docker" against &{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:07:11.367603  521964 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 21:07:11.367700  521964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:07:11.423386  521964 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 21:07:11.414394313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:07:11.423798  521964 cni.go:84] Creating CNI manager for ""
	I1201 21:07:11.423867  521964 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:07:11.423916  521964 start.go:353] cluster config:
	{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:07:11.427203  521964 out.go:179] * Starting "functional-198694" primary control-plane node in "functional-198694" cluster
	I1201 21:07:11.430063  521964 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 21:07:11.433025  521964 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 21:07:11.436022  521964 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:07:11.436110  521964 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 21:07:11.455717  521964 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 21:07:11.455744  521964 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1201 21:07:11.500566  521964 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1201 21:07:11.687123  521964 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1201 21:07:11.687287  521964 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/config.json ...
	I1201 21:07:11.687539  521964 cache.go:243] Successfully downloaded all kic artifacts
	I1201 21:07:11.687581  521964 start.go:360] acquireMachinesLock for functional-198694: {Name:mk75190be8638b73bbf357fb21be879be3d32136 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.687647  521964 start.go:364] duration metric: took 33.501µs to acquireMachinesLock for "functional-198694"
	I1201 21:07:11.687664  521964 start.go:96] Skipping create...Using existing machine configuration
	I1201 21:07:11.687669  521964 fix.go:54] fixHost starting: 
	I1201 21:07:11.687932  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:11.688204  521964 cache.go:107] acquiring lock: {Name:mkc02adc0b0ac86da96d7b1c6f73dd96db198bdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688271  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 21:07:11.688285  521964 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.581µs
	I1201 21:07:11.688306  521964 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 21:07:11.688318  521964 cache.go:107] acquiring lock: {Name:mk453dcc67fddeb9d4497c9de9efb4fa1295449c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688354  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 21:07:11.688367  521964 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 50.575µs
	I1201 21:07:11.688373  521964 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688390  521964 cache.go:107] acquiring lock: {Name:mk419ddf7fad28d46855543ef84396416e53becc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688439  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 21:07:11.688445  521964 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 57.213µs
	I1201 21:07:11.688452  521964 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688467  521964 cache.go:107] acquiring lock: {Name:mka55d294ab8a696f44b35601f713e0abbf24c5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688503  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 21:07:11.688513  521964 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 47.581µs
	I1201 21:07:11.688520  521964 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688529  521964 cache.go:107] acquiring lock: {Name:mk6dcec1fac0989e081c750d70caa7d5974f0e1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688566  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 21:07:11.688576  521964 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 47.712µs
	I1201 21:07:11.688582  521964 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 21:07:11.688591  521964 cache.go:107] acquiring lock: {Name:mkf9aa1f704582196eb72cf90c132f43843b4423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688628  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1201 21:07:11.688637  521964 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 46.916µs
	I1201 21:07:11.688643  521964 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 21:07:11.688652  521964 cache.go:107] acquiring lock: {Name:mk60d129c4890b38a9b86e2bfa4a9fa21bc4f57a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688684  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 21:07:11.688693  521964 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 41.952µs
	I1201 21:07:11.688698  521964 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 21:07:11.688707  521964 cache.go:107] acquiring lock: {Name:mk345d9c863dd9143d9156cb17f795118869c197 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:07:11.688742  521964 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 21:07:11.688749  521964 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 43.527µs
	I1201 21:07:11.688755  521964 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 21:07:11.688763  521964 cache.go:87] Successfully saved all images to host disk.
	I1201 21:07:11.706210  521964 fix.go:112] recreateIfNeeded on functional-198694: state=Running err=<nil>
	W1201 21:07:11.706244  521964 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 21:07:11.709560  521964 out.go:252] * Updating the running docker "functional-198694" container ...
	I1201 21:07:11.709599  521964 machine.go:94] provisionDockerMachine start ...
	I1201 21:07:11.709692  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:11.727308  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:11.727671  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:11.727690  521964 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 21:07:11.874686  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:07:11.874711  521964 ubuntu.go:182] provisioning hostname "functional-198694"
	I1201 21:07:11.874786  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:11.892845  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:11.893165  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:11.893181  521964 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-198694 && echo "functional-198694" | sudo tee /etc/hostname
	I1201 21:07:12.052942  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:07:12.053034  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.072030  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:12.072356  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:12.072379  521964 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-198694' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-198694/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-198694' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 21:07:12.227676  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 21:07:12.227702  521964 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-482752/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-482752/.minikube}
	I1201 21:07:12.227769  521964 ubuntu.go:190] setting up certificates
	I1201 21:07:12.227787  521964 provision.go:84] configureAuth start
	I1201 21:07:12.227860  521964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:07:12.247353  521964 provision.go:143] copyHostCerts
	I1201 21:07:12.247405  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 21:07:12.247445  521964 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem, removing ...
	I1201 21:07:12.247463  521964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 21:07:12.247541  521964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem (1082 bytes)
	I1201 21:07:12.247639  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 21:07:12.247660  521964 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem, removing ...
	I1201 21:07:12.247665  521964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 21:07:12.247698  521964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem (1123 bytes)
	I1201 21:07:12.247755  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 21:07:12.247776  521964 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem, removing ...
	I1201 21:07:12.247785  521964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 21:07:12.247814  521964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem (1675 bytes)
	I1201 21:07:12.247874  521964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem org=jenkins.functional-198694 san=[127.0.0.1 192.168.49.2 functional-198694 localhost minikube]
	I1201 21:07:12.352949  521964 provision.go:177] copyRemoteCerts
	I1201 21:07:12.353031  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 21:07:12.353075  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.373178  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:12.479006  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1201 21:07:12.479125  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 21:07:12.496931  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1201 21:07:12.497043  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1201 21:07:12.515649  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1201 21:07:12.515717  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 21:07:12.533930  521964 provision.go:87] duration metric: took 306.12888ms to configureAuth
	I1201 21:07:12.533957  521964 ubuntu.go:206] setting minikube options for container-runtime
	I1201 21:07:12.534156  521964 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:07:12.534262  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.551972  521964 main.go:143] libmachine: Using SSH client type: native
	I1201 21:07:12.552286  521964 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:07:12.552304  521964 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 21:07:12.889959  521964 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 21:07:12.889981  521964 machine.go:97] duration metric: took 1.180373916s to provisionDockerMachine
	I1201 21:07:12.889993  521964 start.go:293] postStartSetup for "functional-198694" (driver="docker")
	I1201 21:07:12.890006  521964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 21:07:12.890086  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 21:07:12.890139  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:12.908762  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.018597  521964 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 21:07:13.022335  521964 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1201 21:07:13.022369  521964 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1201 21:07:13.022376  521964 command_runner.go:130] > VERSION_ID="12"
	I1201 21:07:13.022381  521964 command_runner.go:130] > VERSION="12 (bookworm)"
	I1201 21:07:13.022386  521964 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1201 21:07:13.022390  521964 command_runner.go:130] > ID=debian
	I1201 21:07:13.022396  521964 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1201 21:07:13.022401  521964 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1201 21:07:13.022407  521964 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1201 21:07:13.022493  521964 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 21:07:13.022513  521964 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 21:07:13.022526  521964 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/addons for local assets ...
	I1201 21:07:13.022584  521964 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/files for local assets ...
	I1201 21:07:13.022685  521964 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> 4860022.pem in /etc/ssl/certs
	I1201 21:07:13.022696  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> /etc/ssl/certs/4860022.pem
	I1201 21:07:13.022772  521964 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts -> hosts in /etc/test/nested/copy/486002
	I1201 21:07:13.022784  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts -> /etc/test/nested/copy/486002/hosts
	I1201 21:07:13.022828  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/486002
	I1201 21:07:13.031305  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:07:13.050359  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts --> /etc/test/nested/copy/486002/hosts (40 bytes)
	I1201 21:07:13.069098  521964 start.go:296] duration metric: took 179.090292ms for postStartSetup
	I1201 21:07:13.069200  521964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 21:07:13.069250  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:13.087931  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.188150  521964 command_runner.go:130] > 18%
	I1201 21:07:13.188720  521964 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 21:07:13.193507  521964 command_runner.go:130] > 161G
	I1201 21:07:13.195867  521964 fix.go:56] duration metric: took 1.508190835s for fixHost
	I1201 21:07:13.195933  521964 start.go:83] releasing machines lock for "functional-198694", held for 1.508273853s
	I1201 21:07:13.196019  521964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:07:13.216611  521964 ssh_runner.go:195] Run: cat /version.json
	I1201 21:07:13.216667  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:13.216936  521964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 21:07:13.216990  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:13.238266  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.249198  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:13.342561  521964 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1201 21:07:13.342766  521964 ssh_runner.go:195] Run: systemctl --version
	I1201 21:07:13.434302  521964 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1201 21:07:13.434432  521964 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1201 21:07:13.434476  521964 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1201 21:07:13.434562  521964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 21:07:13.473148  521964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1201 21:07:13.477954  521964 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1201 21:07:13.478007  521964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 21:07:13.478081  521964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 21:07:13.486513  521964 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 21:07:13.486536  521964 start.go:496] detecting cgroup driver to use...
	I1201 21:07:13.486599  521964 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1201 21:07:13.486671  521964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 21:07:13.502588  521964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 21:07:13.515851  521964 docker.go:218] disabling cri-docker service (if available) ...
	I1201 21:07:13.515935  521964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 21:07:13.531981  521964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 21:07:13.545612  521964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 21:07:13.660013  521964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 21:07:13.783921  521964 docker.go:234] disabling docker service ...
	I1201 21:07:13.783999  521964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 21:07:13.801145  521964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 21:07:13.814790  521964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 21:07:13.959260  521964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 21:07:14.082027  521964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 21:07:14.096899  521964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 21:07:14.110653  521964 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1201 21:07:14.112111  521964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 21:07:14.112234  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.121522  521964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 21:07:14.121606  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.132262  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.141626  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.151111  521964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 21:07:14.160033  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.169622  521964 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.178443  521964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.187976  521964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 21:07:14.194851  521964 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1201 21:07:14.196003  521964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 21:07:14.203835  521964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:07:14.312679  521964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 21:07:14.495171  521964 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 21:07:14.495301  521964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 21:07:14.499086  521964 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1201 21:07:14.499110  521964 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1201 21:07:14.499118  521964 command_runner.go:130] > Device: 0,72	Inode: 1746        Links: 1
	I1201 21:07:14.499125  521964 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1201 21:07:14.499150  521964 command_runner.go:130] > Access: 2025-12-01 21:07:14.424432171 +0000
	I1201 21:07:14.499176  521964 command_runner.go:130] > Modify: 2025-12-01 21:07:14.424432171 +0000
	I1201 21:07:14.499186  521964 command_runner.go:130] > Change: 2025-12-01 21:07:14.424432171 +0000
	I1201 21:07:14.499190  521964 command_runner.go:130] >  Birth: -
	I1201 21:07:14.499219  521964 start.go:564] Will wait 60s for crictl version
	I1201 21:07:14.499275  521964 ssh_runner.go:195] Run: which crictl
	I1201 21:07:14.502678  521964 command_runner.go:130] > /usr/local/bin/crictl
	I1201 21:07:14.502996  521964 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 21:07:14.524882  521964 command_runner.go:130] > Version:  0.1.0
	I1201 21:07:14.524906  521964 command_runner.go:130] > RuntimeName:  cri-o
	I1201 21:07:14.524912  521964 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1201 21:07:14.524918  521964 command_runner.go:130] > RuntimeApiVersion:  v1
	I1201 21:07:14.526840  521964 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 21:07:14.526982  521964 ssh_runner.go:195] Run: crio --version
	I1201 21:07:14.553910  521964 command_runner.go:130] > crio version 1.34.2
	I1201 21:07:14.553933  521964 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1201 21:07:14.553939  521964 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1201 21:07:14.553944  521964 command_runner.go:130] >    GitTreeState:   dirty
	I1201 21:07:14.553950  521964 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1201 21:07:14.553971  521964 command_runner.go:130] >    GoVersion:      go1.24.6
	I1201 21:07:14.553976  521964 command_runner.go:130] >    Compiler:       gc
	I1201 21:07:14.553980  521964 command_runner.go:130] >    Platform:       linux/arm64
	I1201 21:07:14.553984  521964 command_runner.go:130] >    Linkmode:       static
	I1201 21:07:14.553987  521964 command_runner.go:130] >    BuildTags:
	I1201 21:07:14.553991  521964 command_runner.go:130] >      static
	I1201 21:07:14.553994  521964 command_runner.go:130] >      netgo
	I1201 21:07:14.553998  521964 command_runner.go:130] >      osusergo
	I1201 21:07:14.554001  521964 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1201 21:07:14.554009  521964 command_runner.go:130] >      seccomp
	I1201 21:07:14.554012  521964 command_runner.go:130] >      apparmor
	I1201 21:07:14.554016  521964 command_runner.go:130] >      selinux
	I1201 21:07:14.554020  521964 command_runner.go:130] >    LDFlags:          unknown
	I1201 21:07:14.554024  521964 command_runner.go:130] >    SeccompEnabled:   true
	I1201 21:07:14.554028  521964 command_runner.go:130] >    AppArmorEnabled:  false
	I1201 21:07:14.556106  521964 ssh_runner.go:195] Run: crio --version
	I1201 21:07:14.582720  521964 command_runner.go:130] > crio version 1.34.2
	I1201 21:07:14.582784  521964 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1201 21:07:14.582817  521964 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1201 21:07:14.582840  521964 command_runner.go:130] >    GitTreeState:   dirty
	I1201 21:07:14.582863  521964 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1201 21:07:14.582897  521964 command_runner.go:130] >    GoVersion:      go1.24.6
	I1201 21:07:14.582922  521964 command_runner.go:130] >    Compiler:       gc
	I1201 21:07:14.582947  521964 command_runner.go:130] >    Platform:       linux/arm64
	I1201 21:07:14.582984  521964 command_runner.go:130] >    Linkmode:       static
	I1201 21:07:14.583008  521964 command_runner.go:130] >    BuildTags:
	I1201 21:07:14.583029  521964 command_runner.go:130] >      static
	I1201 21:07:14.583063  521964 command_runner.go:130] >      netgo
	I1201 21:07:14.583085  521964 command_runner.go:130] >      osusergo
	I1201 21:07:14.583101  521964 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1201 21:07:14.583121  521964 command_runner.go:130] >      seccomp
	I1201 21:07:14.583170  521964 command_runner.go:130] >      apparmor
	I1201 21:07:14.583196  521964 command_runner.go:130] >      selinux
	I1201 21:07:14.583217  521964 command_runner.go:130] >    LDFlags:          unknown
	I1201 21:07:14.583262  521964 command_runner.go:130] >    SeccompEnabled:   true
	I1201 21:07:14.583287  521964 command_runner.go:130] >    AppArmorEnabled:  false
	I1201 21:07:14.589911  521964 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1201 21:07:14.592808  521964 cli_runner.go:164] Run: docker network inspect functional-198694 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 21:07:14.609405  521964 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1201 21:07:14.613461  521964 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1201 21:07:14.613638  521964 kubeadm.go:884] updating cluster {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 21:07:14.613753  521964 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:07:14.613807  521964 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 21:07:14.655721  521964 command_runner.go:130] > {
	I1201 21:07:14.655745  521964 command_runner.go:130] >   "images":  [
	I1201 21:07:14.655750  521964 command_runner.go:130] >     {
	I1201 21:07:14.655758  521964 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1201 21:07:14.655763  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.655768  521964 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1201 21:07:14.655771  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655775  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.655786  521964 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1201 21:07:14.655790  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655794  521964 command_runner.go:130] >       "size":  "29035622",
	I1201 21:07:14.655798  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.655803  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.655811  521964 command_runner.go:130] >     },
	I1201 21:07:14.655815  521964 command_runner.go:130] >     {
	I1201 21:07:14.655825  521964 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1201 21:07:14.655839  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.655846  521964 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1201 21:07:14.655854  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655858  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.655866  521964 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1201 21:07:14.655871  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655876  521964 command_runner.go:130] >       "size":  "74488375",
	I1201 21:07:14.655880  521964 command_runner.go:130] >       "username":  "nonroot",
	I1201 21:07:14.655884  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.655888  521964 command_runner.go:130] >     },
	I1201 21:07:14.655891  521964 command_runner.go:130] >     {
	I1201 21:07:14.655901  521964 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1201 21:07:14.655907  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.655912  521964 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1201 21:07:14.655918  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655927  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.655946  521964 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1201 21:07:14.655955  521964 command_runner.go:130] >       ],
	I1201 21:07:14.655960  521964 command_runner.go:130] >       "size":  "60854229",
	I1201 21:07:14.655965  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.655974  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.655978  521964 command_runner.go:130] >       },
	I1201 21:07:14.655982  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.655986  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.655989  521964 command_runner.go:130] >     },
	I1201 21:07:14.655995  521964 command_runner.go:130] >     {
	I1201 21:07:14.656002  521964 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1201 21:07:14.656010  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656015  521964 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1201 21:07:14.656018  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656024  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656033  521964 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1201 21:07:14.656040  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656044  521964 command_runner.go:130] >       "size":  "84947242",
	I1201 21:07:14.656047  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656051  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.656061  521964 command_runner.go:130] >       },
	I1201 21:07:14.656065  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656068  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656071  521964 command_runner.go:130] >     },
	I1201 21:07:14.656075  521964 command_runner.go:130] >     {
	I1201 21:07:14.656084  521964 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1201 21:07:14.656090  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656096  521964 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1201 21:07:14.656100  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656106  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656115  521964 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1201 21:07:14.656121  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656132  521964 command_runner.go:130] >       "size":  "72167568",
	I1201 21:07:14.656139  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656143  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.656146  521964 command_runner.go:130] >       },
	I1201 21:07:14.656150  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656154  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656160  521964 command_runner.go:130] >     },
	I1201 21:07:14.656163  521964 command_runner.go:130] >     {
	I1201 21:07:14.656170  521964 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1201 21:07:14.656176  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656182  521964 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1201 21:07:14.656185  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656209  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656218  521964 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1201 21:07:14.656223  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656228  521964 command_runner.go:130] >       "size":  "74105124",
	I1201 21:07:14.656231  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656236  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656241  521964 command_runner.go:130] >     },
	I1201 21:07:14.656245  521964 command_runner.go:130] >     {
	I1201 21:07:14.656251  521964 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1201 21:07:14.656257  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656262  521964 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1201 21:07:14.656268  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656272  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656279  521964 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1201 21:07:14.656285  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656289  521964 command_runner.go:130] >       "size":  "49819792",
	I1201 21:07:14.656293  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656303  521964 command_runner.go:130] >         "value":  "0"
	I1201 21:07:14.656307  521964 command_runner.go:130] >       },
	I1201 21:07:14.656311  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656316  521964 command_runner.go:130] >       "pinned":  false
	I1201 21:07:14.656323  521964 command_runner.go:130] >     },
	I1201 21:07:14.656330  521964 command_runner.go:130] >     {
	I1201 21:07:14.656337  521964 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1201 21:07:14.656341  521964 command_runner.go:130] >       "repoTags":  [
	I1201 21:07:14.656345  521964 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1201 21:07:14.656350  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656355  521964 command_runner.go:130] >       "repoDigests":  [
	I1201 21:07:14.656365  521964 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1201 21:07:14.656368  521964 command_runner.go:130] >       ],
	I1201 21:07:14.656372  521964 command_runner.go:130] >       "size":  "517328",
	I1201 21:07:14.656378  521964 command_runner.go:130] >       "uid":  {
	I1201 21:07:14.656383  521964 command_runner.go:130] >         "value":  "65535"
	I1201 21:07:14.656388  521964 command_runner.go:130] >       },
	I1201 21:07:14.656392  521964 command_runner.go:130] >       "username":  "",
	I1201 21:07:14.656395  521964 command_runner.go:130] >       "pinned":  true
	I1201 21:07:14.656399  521964 command_runner.go:130] >     }
	I1201 21:07:14.656404  521964 command_runner.go:130] >   ]
	I1201 21:07:14.656408  521964 command_runner.go:130] > }
	I1201 21:07:14.656549  521964 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 21:07:14.656561  521964 cache_images.go:86] Images are preloaded, skipping loading
	I1201 21:07:14.656568  521964 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1201 21:07:14.656668  521964 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-198694 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 21:07:14.656752  521964 ssh_runner.go:195] Run: crio config
	I1201 21:07:14.734869  521964 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1201 21:07:14.734915  521964 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1201 21:07:14.734928  521964 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1201 21:07:14.734945  521964 command_runner.go:130] > #
	I1201 21:07:14.734957  521964 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1201 21:07:14.734978  521964 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1201 21:07:14.734989  521964 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1201 21:07:14.735001  521964 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1201 21:07:14.735009  521964 command_runner.go:130] > # reload'.
	I1201 21:07:14.735017  521964 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1201 21:07:14.735028  521964 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1201 21:07:14.735038  521964 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1201 21:07:14.735051  521964 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1201 21:07:14.735059  521964 command_runner.go:130] > [crio]
	I1201 21:07:14.735069  521964 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1201 21:07:14.735078  521964 command_runner.go:130] > # containers images, in this directory.
	I1201 21:07:14.735108  521964 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1201 21:07:14.735125  521964 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1201 21:07:14.735149  521964 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1201 21:07:14.735158  521964 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1201 21:07:14.735167  521964 command_runner.go:130] > # imagestore = ""
	I1201 21:07:14.735180  521964 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1201 21:07:14.735200  521964 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1201 21:07:14.735401  521964 command_runner.go:130] > # storage_driver = "overlay"
	I1201 21:07:14.735416  521964 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1201 21:07:14.735422  521964 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1201 21:07:14.735427  521964 command_runner.go:130] > # storage_option = [
	I1201 21:07:14.735430  521964 command_runner.go:130] > # ]
	I1201 21:07:14.735440  521964 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1201 21:07:14.735447  521964 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1201 21:07:14.735451  521964 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1201 21:07:14.735457  521964 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1201 21:07:14.735464  521964 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1201 21:07:14.735475  521964 command_runner.go:130] > # always happen on a node reboot
	I1201 21:07:14.735773  521964 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1201 21:07:14.735799  521964 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1201 21:07:14.735807  521964 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1201 21:07:14.735813  521964 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1201 21:07:14.735817  521964 command_runner.go:130] > # version_file_persist = ""
	I1201 21:07:14.735825  521964 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1201 21:07:14.735839  521964 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1201 21:07:14.735844  521964 command_runner.go:130] > # internal_wipe = true
	I1201 21:07:14.735852  521964 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1201 21:07:14.735858  521964 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1201 21:07:14.735861  521964 command_runner.go:130] > # internal_repair = true
	I1201 21:07:14.735867  521964 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1201 21:07:14.735873  521964 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1201 21:07:14.735882  521964 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1201 21:07:14.735891  521964 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1201 21:07:14.735901  521964 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1201 21:07:14.735904  521964 command_runner.go:130] > [crio.api]
	I1201 21:07:14.735909  521964 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1201 21:07:14.735916  521964 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1201 21:07:14.735921  521964 command_runner.go:130] > # IP address on which the stream server will listen.
	I1201 21:07:14.735925  521964 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1201 21:07:14.735932  521964 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1201 21:07:14.735946  521964 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1201 21:07:14.735950  521964 command_runner.go:130] > # stream_port = "0"
	I1201 21:07:14.735958  521964 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1201 21:07:14.735962  521964 command_runner.go:130] > # stream_enable_tls = false
	I1201 21:07:14.735968  521964 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1201 21:07:14.735972  521964 command_runner.go:130] > # stream_idle_timeout = ""
	I1201 21:07:14.735981  521964 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1201 21:07:14.735991  521964 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1201 21:07:14.735995  521964 command_runner.go:130] > # stream_tls_cert = ""
	I1201 21:07:14.736001  521964 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1201 21:07:14.736006  521964 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1201 21:07:14.736013  521964 command_runner.go:130] > # stream_tls_key = ""
	I1201 21:07:14.736023  521964 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1201 21:07:14.736030  521964 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1201 21:07:14.736037  521964 command_runner.go:130] > # automatically pick up the changes.
	I1201 21:07:14.736045  521964 command_runner.go:130] > # stream_tls_ca = ""
	I1201 21:07:14.736072  521964 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1201 21:07:14.736077  521964 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1201 21:07:14.736085  521964 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1201 21:07:14.736092  521964 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1201 21:07:14.736099  521964 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1201 21:07:14.736105  521964 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1201 21:07:14.736108  521964 command_runner.go:130] > [crio.runtime]
	I1201 21:07:14.736114  521964 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1201 21:07:14.736119  521964 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1201 21:07:14.736127  521964 command_runner.go:130] > # "nofile=1024:2048"
	I1201 21:07:14.736134  521964 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1201 21:07:14.736138  521964 command_runner.go:130] > # default_ulimits = [
	I1201 21:07:14.736141  521964 command_runner.go:130] > # ]
	I1201 21:07:14.736146  521964 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1201 21:07:14.736150  521964 command_runner.go:130] > # no_pivot = false
	I1201 21:07:14.736162  521964 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1201 21:07:14.736168  521964 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1201 21:07:14.736196  521964 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1201 21:07:14.736202  521964 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1201 21:07:14.736210  521964 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1201 21:07:14.736220  521964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1201 21:07:14.736223  521964 command_runner.go:130] > # conmon = ""
	I1201 21:07:14.736228  521964 command_runner.go:130] > # Cgroup setting for conmon
	I1201 21:07:14.736235  521964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1201 21:07:14.736239  521964 command_runner.go:130] > conmon_cgroup = "pod"
	I1201 21:07:14.736257  521964 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1201 21:07:14.736262  521964 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1201 21:07:14.736269  521964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1201 21:07:14.736273  521964 command_runner.go:130] > # conmon_env = [
	I1201 21:07:14.736276  521964 command_runner.go:130] > # ]
	I1201 21:07:14.736281  521964 command_runner.go:130] > # Additional environment variables to set for all the
	I1201 21:07:14.736286  521964 command_runner.go:130] > # containers. These are overridden if set in the
	I1201 21:07:14.736295  521964 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1201 21:07:14.736302  521964 command_runner.go:130] > # default_env = [
	I1201 21:07:14.736308  521964 command_runner.go:130] > # ]
	I1201 21:07:14.736314  521964 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1201 21:07:14.736322  521964 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1201 21:07:14.736328  521964 command_runner.go:130] > # selinux = false
	I1201 21:07:14.736356  521964 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1201 21:07:14.736370  521964 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1201 21:07:14.736375  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736379  521964 command_runner.go:130] > # seccomp_profile = ""
	I1201 21:07:14.736388  521964 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1201 21:07:14.736393  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736397  521964 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1201 21:07:14.736406  521964 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1201 21:07:14.736413  521964 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1201 21:07:14.736419  521964 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1201 21:07:14.736425  521964 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1201 21:07:14.736431  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736439  521964 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1201 21:07:14.736445  521964 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1201 21:07:14.736449  521964 command_runner.go:130] > # the cgroup blockio controller.
	I1201 21:07:14.736452  521964 command_runner.go:130] > # blockio_config_file = ""
	I1201 21:07:14.736459  521964 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1201 21:07:14.736463  521964 command_runner.go:130] > # blockio parameters.
	I1201 21:07:14.736467  521964 command_runner.go:130] > # blockio_reload = false
	I1201 21:07:14.736474  521964 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1201 21:07:14.736477  521964 command_runner.go:130] > # irqbalance daemon.
	I1201 21:07:14.736483  521964 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1201 21:07:14.736489  521964 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1201 21:07:14.736496  521964 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1201 21:07:14.736508  521964 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1201 21:07:14.736514  521964 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1201 21:07:14.736523  521964 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1201 21:07:14.736532  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.736536  521964 command_runner.go:130] > # rdt_config_file = ""
	I1201 21:07:14.736545  521964 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1201 21:07:14.736550  521964 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1201 21:07:14.736555  521964 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1201 21:07:14.736560  521964 command_runner.go:130] > # separate_pull_cgroup = ""
	I1201 21:07:14.736569  521964 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1201 21:07:14.736576  521964 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1201 21:07:14.736580  521964 command_runner.go:130] > # will be added.
	I1201 21:07:14.736585  521964 command_runner.go:130] > # default_capabilities = [
	I1201 21:07:14.737078  521964 command_runner.go:130] > # 	"CHOWN",
	I1201 21:07:14.737092  521964 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1201 21:07:14.737096  521964 command_runner.go:130] > # 	"FSETID",
	I1201 21:07:14.737099  521964 command_runner.go:130] > # 	"FOWNER",
	I1201 21:07:14.737102  521964 command_runner.go:130] > # 	"SETGID",
	I1201 21:07:14.737106  521964 command_runner.go:130] > # 	"SETUID",
	I1201 21:07:14.737130  521964 command_runner.go:130] > # 	"SETPCAP",
	I1201 21:07:14.737134  521964 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1201 21:07:14.737138  521964 command_runner.go:130] > # 	"KILL",
	I1201 21:07:14.737144  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737153  521964 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1201 21:07:14.737160  521964 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1201 21:07:14.737165  521964 command_runner.go:130] > # add_inheritable_capabilities = false
	I1201 21:07:14.737171  521964 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1201 21:07:14.737189  521964 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1201 21:07:14.737193  521964 command_runner.go:130] > default_sysctls = [
	I1201 21:07:14.737198  521964 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1201 21:07:14.737200  521964 command_runner.go:130] > ]
	I1201 21:07:14.737205  521964 command_runner.go:130] > # List of devices on the host that a
	I1201 21:07:14.737212  521964 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1201 21:07:14.737215  521964 command_runner.go:130] > # allowed_devices = [
	I1201 21:07:14.737219  521964 command_runner.go:130] > # 	"/dev/fuse",
	I1201 21:07:14.737222  521964 command_runner.go:130] > # 	"/dev/net/tun",
	I1201 21:07:14.737225  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737230  521964 command_runner.go:130] > # List of additional devices. specified as
	I1201 21:07:14.737237  521964 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1201 21:07:14.737243  521964 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1201 21:07:14.737249  521964 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1201 21:07:14.737253  521964 command_runner.go:130] > # additional_devices = [
	I1201 21:07:14.737257  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737266  521964 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1201 21:07:14.737271  521964 command_runner.go:130] > # cdi_spec_dirs = [
	I1201 21:07:14.737274  521964 command_runner.go:130] > # 	"/etc/cdi",
	I1201 21:07:14.737277  521964 command_runner.go:130] > # 	"/var/run/cdi",
	I1201 21:07:14.737280  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737286  521964 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1201 21:07:14.737293  521964 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1201 21:07:14.737297  521964 command_runner.go:130] > # Defaults to false.
	I1201 21:07:14.737311  521964 command_runner.go:130] > # device_ownership_from_security_context = false
	I1201 21:07:14.737318  521964 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1201 21:07:14.737324  521964 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1201 21:07:14.737327  521964 command_runner.go:130] > # hooks_dir = [
	I1201 21:07:14.737335  521964 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1201 21:07:14.737338  521964 command_runner.go:130] > # ]
	I1201 21:07:14.737344  521964 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1201 21:07:14.737352  521964 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1201 21:07:14.737357  521964 command_runner.go:130] > # its default mounts from the following two files:
	I1201 21:07:14.737360  521964 command_runner.go:130] > #
	I1201 21:07:14.737366  521964 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1201 21:07:14.737372  521964 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1201 21:07:14.737378  521964 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1201 21:07:14.737380  521964 command_runner.go:130] > #
	I1201 21:07:14.737386  521964 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1201 21:07:14.737393  521964 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1201 21:07:14.737399  521964 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1201 21:07:14.737407  521964 command_runner.go:130] > #      only add mounts it finds in this file.
	I1201 21:07:14.737410  521964 command_runner.go:130] > #
	I1201 21:07:14.737414  521964 command_runner.go:130] > # default_mounts_file = ""
	I1201 21:07:14.737422  521964 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1201 21:07:14.737429  521964 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1201 21:07:14.737433  521964 command_runner.go:130] > # pids_limit = -1
	I1201 21:07:14.737440  521964 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1201 21:07:14.737446  521964 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1201 21:07:14.737452  521964 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1201 21:07:14.737460  521964 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1201 21:07:14.737464  521964 command_runner.go:130] > # log_size_max = -1
	I1201 21:07:14.737472  521964 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1201 21:07:14.737476  521964 command_runner.go:130] > # log_to_journald = false
	I1201 21:07:14.737487  521964 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1201 21:07:14.737492  521964 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1201 21:07:14.737497  521964 command_runner.go:130] > # Path to directory for container attach sockets.
	I1201 21:07:14.737502  521964 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1201 21:07:14.737511  521964 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1201 21:07:14.737516  521964 command_runner.go:130] > # bind_mount_prefix = ""
	I1201 21:07:14.737521  521964 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1201 21:07:14.737528  521964 command_runner.go:130] > # read_only = false
	I1201 21:07:14.737534  521964 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1201 21:07:14.737541  521964 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1201 21:07:14.737545  521964 command_runner.go:130] > # live configuration reload.
	I1201 21:07:14.737549  521964 command_runner.go:130] > # log_level = "info"
	I1201 21:07:14.737557  521964 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1201 21:07:14.737563  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.737567  521964 command_runner.go:130] > # log_filter = ""
	I1201 21:07:14.737573  521964 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1201 21:07:14.737583  521964 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1201 21:07:14.737588  521964 command_runner.go:130] > # separated by comma.
	I1201 21:07:14.737596  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737599  521964 command_runner.go:130] > # uid_mappings = ""
	I1201 21:07:14.737606  521964 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1201 21:07:14.737612  521964 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1201 21:07:14.737616  521964 command_runner.go:130] > # separated by comma.
	I1201 21:07:14.737624  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737627  521964 command_runner.go:130] > # gid_mappings = ""
	I1201 21:07:14.737634  521964 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1201 21:07:14.737640  521964 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1201 21:07:14.737646  521964 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1201 21:07:14.737660  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737665  521964 command_runner.go:130] > # minimum_mappable_uid = -1
	I1201 21:07:14.737674  521964 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1201 21:07:14.737681  521964 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1201 21:07:14.737686  521964 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1201 21:07:14.737694  521964 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1201 21:07:14.737937  521964 command_runner.go:130] > # minimum_mappable_gid = -1
	I1201 21:07:14.737957  521964 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1201 21:07:14.737967  521964 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1201 21:07:14.737974  521964 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1201 21:07:14.737980  521964 command_runner.go:130] > # ctr_stop_timeout = 30
	I1201 21:07:14.737998  521964 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1201 21:07:14.738018  521964 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1201 21:07:14.738028  521964 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1201 21:07:14.738033  521964 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1201 21:07:14.738042  521964 command_runner.go:130] > # drop_infra_ctr = true
	I1201 21:07:14.738048  521964 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1201 21:07:14.738058  521964 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1201 21:07:14.738073  521964 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1201 21:07:14.738082  521964 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1201 21:07:14.738090  521964 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1201 21:07:14.738099  521964 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1201 21:07:14.738106  521964 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1201 21:07:14.738116  521964 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1201 21:07:14.738120  521964 command_runner.go:130] > # shared_cpuset = ""
	I1201 21:07:14.738130  521964 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1201 21:07:14.738139  521964 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1201 21:07:14.738154  521964 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1201 21:07:14.738162  521964 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1201 21:07:14.738167  521964 command_runner.go:130] > # pinns_path = ""
	I1201 21:07:14.738173  521964 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1201 21:07:14.738182  521964 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1201 21:07:14.738191  521964 command_runner.go:130] > # enable_criu_support = true
	I1201 21:07:14.738197  521964 command_runner.go:130] > # Enable/disable the generation of the container,
	I1201 21:07:14.738206  521964 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1201 21:07:14.738221  521964 command_runner.go:130] > # enable_pod_events = false
	I1201 21:07:14.738232  521964 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1201 21:07:14.738238  521964 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1201 21:07:14.738242  521964 command_runner.go:130] > # default_runtime = "crun"
	I1201 21:07:14.738251  521964 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1201 21:07:14.738259  521964 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1201 21:07:14.738269  521964 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1201 21:07:14.738278  521964 command_runner.go:130] > # creation as a file is not desired either.
	I1201 21:07:14.738287  521964 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1201 21:07:14.738304  521964 command_runner.go:130] > # the hostname is being managed dynamically.
	I1201 21:07:14.738322  521964 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1201 21:07:14.738329  521964 command_runner.go:130] > # ]
	I1201 21:07:14.738336  521964 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1201 21:07:14.738347  521964 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1201 21:07:14.738353  521964 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1201 21:07:14.738358  521964 command_runner.go:130] > # Each entry in the table should follow the format:
	I1201 21:07:14.738365  521964 command_runner.go:130] > #
	I1201 21:07:14.738381  521964 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1201 21:07:14.738387  521964 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1201 21:07:14.738394  521964 command_runner.go:130] > # runtime_type = "oci"
	I1201 21:07:14.738400  521964 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1201 21:07:14.738408  521964 command_runner.go:130] > # inherit_default_runtime = false
	I1201 21:07:14.738414  521964 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1201 21:07:14.738421  521964 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1201 21:07:14.738426  521964 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1201 21:07:14.738434  521964 command_runner.go:130] > # monitor_env = []
	I1201 21:07:14.738439  521964 command_runner.go:130] > # privileged_without_host_devices = false
	I1201 21:07:14.738449  521964 command_runner.go:130] > # allowed_annotations = []
	I1201 21:07:14.738459  521964 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1201 21:07:14.738463  521964 command_runner.go:130] > # no_sync_log = false
	I1201 21:07:14.738469  521964 command_runner.go:130] > # default_annotations = {}
	I1201 21:07:14.738473  521964 command_runner.go:130] > # stream_websockets = false
	I1201 21:07:14.738481  521964 command_runner.go:130] > # seccomp_profile = ""
	I1201 21:07:14.738515  521964 command_runner.go:130] > # Where:
	I1201 21:07:14.738533  521964 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1201 21:07:14.738539  521964 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1201 21:07:14.738546  521964 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1201 21:07:14.738556  521964 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1201 21:07:14.738560  521964 command_runner.go:130] > #   in $PATH.
	I1201 21:07:14.738572  521964 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1201 21:07:14.738581  521964 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1201 21:07:14.738587  521964 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1201 21:07:14.738601  521964 command_runner.go:130] > #   state.
	I1201 21:07:14.738612  521964 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1201 21:07:14.738623  521964 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1201 21:07:14.738629  521964 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1201 21:07:14.738641  521964 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1201 21:07:14.738648  521964 command_runner.go:130] > #   the values from the default runtime on load time.
	I1201 21:07:14.738658  521964 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1201 21:07:14.738675  521964 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1201 21:07:14.738686  521964 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1201 21:07:14.738697  521964 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1201 21:07:14.738706  521964 command_runner.go:130] > #   The currently recognized values are:
	I1201 21:07:14.738713  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1201 21:07:14.738722  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1201 21:07:14.738731  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1201 21:07:14.738737  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1201 21:07:14.738751  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1201 21:07:14.738762  521964 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1201 21:07:14.738774  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1201 21:07:14.738785  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1201 21:07:14.738795  521964 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1201 21:07:14.738801  521964 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1201 21:07:14.738814  521964 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1201 21:07:14.738830  521964 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1201 21:07:14.738841  521964 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1201 21:07:14.738847  521964 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1201 21:07:14.738857  521964 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1201 21:07:14.738871  521964 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1201 21:07:14.738878  521964 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1201 21:07:14.738885  521964 command_runner.go:130] > #   deprecated option "conmon".
	I1201 21:07:14.738904  521964 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1201 21:07:14.738913  521964 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1201 21:07:14.738921  521964 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1201 21:07:14.738930  521964 command_runner.go:130] > #   should be moved to the container's cgroup
	I1201 21:07:14.738937  521964 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1201 21:07:14.738949  521964 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1201 21:07:14.738961  521964 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1201 21:07:14.738974  521964 command_runner.go:130] > #   conmon-rs by using:
	I1201 21:07:14.738982  521964 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1201 21:07:14.738996  521964 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1201 21:07:14.739008  521964 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1201 21:07:14.739024  521964 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1201 21:07:14.739033  521964 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1201 21:07:14.739040  521964 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1201 21:07:14.739057  521964 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1201 21:07:14.739067  521964 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1201 21:07:14.739077  521964 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1201 21:07:14.739089  521964 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1201 21:07:14.739097  521964 command_runner.go:130] > #   when a machine crash happens.
	I1201 21:07:14.739105  521964 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1201 21:07:14.739117  521964 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1201 21:07:14.739152  521964 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1201 21:07:14.739158  521964 command_runner.go:130] > #   seccomp profile for the runtime.
	I1201 21:07:14.739165  521964 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1201 21:07:14.739172  521964 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1201 21:07:14.739175  521964 command_runner.go:130] > #
	I1201 21:07:14.739179  521964 command_runner.go:130] > # Using the seccomp notifier feature:
	I1201 21:07:14.739182  521964 command_runner.go:130] > #
	I1201 21:07:14.739188  521964 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1201 21:07:14.739195  521964 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1201 21:07:14.739204  521964 command_runner.go:130] > #
	I1201 21:07:14.739211  521964 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1201 21:07:14.739217  521964 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1201 21:07:14.739220  521964 command_runner.go:130] > #
	I1201 21:07:14.739225  521964 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1201 21:07:14.739228  521964 command_runner.go:130] > # feature.
	I1201 21:07:14.739231  521964 command_runner.go:130] > #
	I1201 21:07:14.739237  521964 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1201 21:07:14.739247  521964 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1201 21:07:14.739257  521964 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1201 21:07:14.739263  521964 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1201 21:07:14.739270  521964 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1201 21:07:14.739281  521964 command_runner.go:130] > #
	I1201 21:07:14.739288  521964 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1201 21:07:14.739293  521964 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1201 21:07:14.739296  521964 command_runner.go:130] > #
	I1201 21:07:14.739302  521964 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1201 21:07:14.739308  521964 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1201 21:07:14.739310  521964 command_runner.go:130] > #
	I1201 21:07:14.739316  521964 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1201 21:07:14.739322  521964 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1201 21:07:14.739325  521964 command_runner.go:130] > # limitation.
	I1201 21:07:14.739329  521964 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1201 21:07:14.739334  521964 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1201 21:07:14.739337  521964 command_runner.go:130] > runtime_type = ""
	I1201 21:07:14.739341  521964 command_runner.go:130] > runtime_root = "/run/crun"
	I1201 21:07:14.739345  521964 command_runner.go:130] > inherit_default_runtime = false
	I1201 21:07:14.739356  521964 command_runner.go:130] > runtime_config_path = ""
	I1201 21:07:14.739360  521964 command_runner.go:130] > container_min_memory = ""
	I1201 21:07:14.739365  521964 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1201 21:07:14.739369  521964 command_runner.go:130] > monitor_cgroup = "pod"
	I1201 21:07:14.739373  521964 command_runner.go:130] > monitor_exec_cgroup = ""
	I1201 21:07:14.739380  521964 command_runner.go:130] > allowed_annotations = [
	I1201 21:07:14.739384  521964 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1201 21:07:14.739391  521964 command_runner.go:130] > ]
	I1201 21:07:14.739396  521964 command_runner.go:130] > privileged_without_host_devices = false
	I1201 21:07:14.739400  521964 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1201 21:07:14.739409  521964 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1201 21:07:14.739413  521964 command_runner.go:130] > runtime_type = ""
	I1201 21:07:14.739420  521964 command_runner.go:130] > runtime_root = "/run/runc"
	I1201 21:07:14.739434  521964 command_runner.go:130] > inherit_default_runtime = false
	I1201 21:07:14.739442  521964 command_runner.go:130] > runtime_config_path = ""
	I1201 21:07:14.739450  521964 command_runner.go:130] > container_min_memory = ""
	I1201 21:07:14.739455  521964 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1201 21:07:14.739459  521964 command_runner.go:130] > monitor_cgroup = "pod"
	I1201 21:07:14.739465  521964 command_runner.go:130] > monitor_exec_cgroup = ""
	I1201 21:07:14.739470  521964 command_runner.go:130] > privileged_without_host_devices = false
	I1201 21:07:14.739481  521964 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1201 21:07:14.739490  521964 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1201 21:07:14.739507  521964 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1201 21:07:14.739519  521964 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1201 21:07:14.739534  521964 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1201 21:07:14.739546  521964 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1201 21:07:14.739559  521964 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1201 21:07:14.739569  521964 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1201 21:07:14.739589  521964 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1201 21:07:14.739601  521964 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1201 21:07:14.739616  521964 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1201 21:07:14.739627  521964 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1201 21:07:14.739635  521964 command_runner.go:130] > # Example:
	I1201 21:07:14.739639  521964 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1201 21:07:14.739652  521964 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1201 21:07:14.739663  521964 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1201 21:07:14.739669  521964 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1201 21:07:14.739672  521964 command_runner.go:130] > # cpuset = "0-1"
	I1201 21:07:14.739681  521964 command_runner.go:130] > # cpushares = "5"
	I1201 21:07:14.739685  521964 command_runner.go:130] > # cpuquota = "1000"
	I1201 21:07:14.739694  521964 command_runner.go:130] > # cpuperiod = "100000"
	I1201 21:07:14.739698  521964 command_runner.go:130] > # cpulimit = "35"
	I1201 21:07:14.739705  521964 command_runner.go:130] > # Where:
	I1201 21:07:14.739709  521964 command_runner.go:130] > # The workload name is workload-type.
	I1201 21:07:14.739716  521964 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1201 21:07:14.739728  521964 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1201 21:07:14.739739  521964 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1201 21:07:14.739752  521964 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1201 21:07:14.739762  521964 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
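	[editor's note] Putting the two annotation forms above together, a pod selecting the example `workload-type` workload could look like the following sketch. All names and values here are illustrative (the activation annotation value in particular is an assumption, since the config comments only require the key to be present):

	```yaml
	apiVersion: v1
	kind: Pod
	metadata:
	  name: example-pod                      # hypothetical pod name
	  annotations:
	    # Activation: matches activation_annotation = "io.crio/workload"
	    io.crio/workload: "true"
	    # Per-container override, form: $annotation_prefix/$ctrName
	    io.crio.workload-type/app: '{"cpushares": "200"}'
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.10.1  # placeholder image
	```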
	I1201 21:07:14.739768  521964 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1201 21:07:14.739778  521964 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1201 21:07:14.739786  521964 command_runner.go:130] > # Default value is set to true
	I1201 21:07:14.739791  521964 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1201 21:07:14.739803  521964 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1201 21:07:14.739813  521964 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1201 21:07:14.739818  521964 command_runner.go:130] > # Default value is set to 'false'
	I1201 21:07:14.739822  521964 command_runner.go:130] > # disable_hostport_mapping = false
	I1201 21:07:14.739830  521964 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1201 21:07:14.739839  521964 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1201 21:07:14.739846  521964 command_runner.go:130] > # timezone = ""
	I1201 21:07:14.739853  521964 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1201 21:07:14.739859  521964 command_runner.go:130] > #
	I1201 21:07:14.739866  521964 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1201 21:07:14.739884  521964 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1201 21:07:14.739892  521964 command_runner.go:130] > [crio.image]
	I1201 21:07:14.739898  521964 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1201 21:07:14.739903  521964 command_runner.go:130] > # default_transport = "docker://"
	I1201 21:07:14.739913  521964 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1201 21:07:14.739919  521964 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1201 21:07:14.739926  521964 command_runner.go:130] > # global_auth_file = ""
	I1201 21:07:14.739931  521964 command_runner.go:130] > # The image used to instantiate infra containers.
	I1201 21:07:14.739940  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.739952  521964 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1201 21:07:14.739964  521964 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1201 21:07:14.739973  521964 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1201 21:07:14.739979  521964 command_runner.go:130] > # This option supports live configuration reload.
	I1201 21:07:14.739986  521964 command_runner.go:130] > # pause_image_auth_file = ""
	I1201 21:07:14.739993  521964 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1201 21:07:14.740002  521964 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1201 21:07:14.740009  521964 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1201 21:07:14.740029  521964 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1201 21:07:14.740037  521964 command_runner.go:130] > # pause_command = "/pause"
	I1201 21:07:14.740044  521964 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1201 21:07:14.740053  521964 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1201 21:07:14.740060  521964 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1201 21:07:14.740070  521964 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1201 21:07:14.740076  521964 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1201 21:07:14.740086  521964 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1201 21:07:14.740091  521964 command_runner.go:130] > # pinned_images = [
	I1201 21:07:14.740093  521964 command_runner.go:130] > # ]
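	[editor's note] The three `pinned_images` pattern kinds described above (exact, trailing-glob, keyword) can be sketched in Python. `pin_matches` is a hypothetical helper illustrating the documented semantics, not CRI-O's actual implementation:

	```python
	def pin_matches(pattern: str, image: str) -> bool:
	    """Sketch of pinned_images matching: exact, glob, or keyword."""
	    if pattern.startswith("*") and pattern.endswith("*") and len(pattern) > 1:
	        # Keyword match: wildcards on both ends, e.g. "*pause*".
	        return pattern[1:-1] in image
	    if pattern.endswith("*"):
	        # Glob match: a single trailing wildcard, e.g. "registry.k8s.io/*".
	        return image.startswith(pattern[:-1])
	    # Exact match: must match the entire image name.
	    return image == pattern
	```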
	I1201 21:07:14.740110  521964 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1201 21:07:14.740121  521964 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1201 21:07:14.740133  521964 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1201 21:07:14.740143  521964 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1201 21:07:14.740153  521964 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1201 21:07:14.740158  521964 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
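	[editor's note] The referenced file holds a containers-policy.json(5) document. A minimal permissive policy, of the kind minikube commonly ships, looks like this (illustrative; the actual contents of /etc/crio/policy.json in this run are not shown in the log):

	```json
	{
	  "default": [
	    { "type": "insecureAcceptAnything" }
	  ]
	}
	```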
	I1201 21:07:14.740167  521964 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1201 21:07:14.740181  521964 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1201 21:07:14.740204  521964 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1201 21:07:14.740215  521964 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1201 21:07:14.740226  521964 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1201 21:07:14.740236  521964 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1201 21:07:14.740243  521964 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1201 21:07:14.740259  521964 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1201 21:07:14.740263  521964 command_runner.go:130] > # changing them here.
	I1201 21:07:14.740273  521964 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1201 21:07:14.740278  521964 command_runner.go:130] > # insecure_registries = [
	I1201 21:07:14.740285  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740293  521964 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1201 21:07:14.740302  521964 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1201 21:07:14.740306  521964 command_runner.go:130] > # image_volumes = "mkdir"
	I1201 21:07:14.740316  521964 command_runner.go:130] > # Temporary directory to use for storing big files
	I1201 21:07:14.740321  521964 command_runner.go:130] > # big_files_temporary_dir = ""
	I1201 21:07:14.740340  521964 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1201 21:07:14.740349  521964 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1201 21:07:14.740358  521964 command_runner.go:130] > # auto_reload_registries = false
	I1201 21:07:14.740364  521964 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1201 21:07:14.740376  521964 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1201 21:07:14.740387  521964 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1201 21:07:14.740391  521964 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1201 21:07:14.740399  521964 command_runner.go:130] > # The mode of short name resolution.
	I1201 21:07:14.740415  521964 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1201 21:07:14.740423  521964 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1201 21:07:14.740428  521964 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1201 21:07:14.740436  521964 command_runner.go:130] > # short_name_mode = "enforcing"
	I1201 21:07:14.740443  521964 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1201 21:07:14.740453  521964 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1201 21:07:14.740462  521964 command_runner.go:130] > # oci_artifact_mount_support = true
	I1201 21:07:14.740469  521964 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1201 21:07:14.740484  521964 command_runner.go:130] > # CNI plugins.
	I1201 21:07:14.740492  521964 command_runner.go:130] > [crio.network]
	I1201 21:07:14.740498  521964 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1201 21:07:14.740504  521964 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1201 21:07:14.740512  521964 command_runner.go:130] > # cni_default_network = ""
	I1201 21:07:14.740519  521964 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1201 21:07:14.740530  521964 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1201 21:07:14.740540  521964 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1201 21:07:14.740549  521964 command_runner.go:130] > # plugin_dirs = [
	I1201 21:07:14.740562  521964 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1201 21:07:14.740566  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740576  521964 command_runner.go:130] > # List of included pod metrics.
	I1201 21:07:14.740580  521964 command_runner.go:130] > # included_pod_metrics = [
	I1201 21:07:14.740583  521964 command_runner.go:130] > # ]
	I1201 21:07:14.740588  521964 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1201 21:07:14.740596  521964 command_runner.go:130] > [crio.metrics]
	I1201 21:07:14.740602  521964 command_runner.go:130] > # Globally enable or disable metrics support.
	I1201 21:07:14.740614  521964 command_runner.go:130] > # enable_metrics = false
	I1201 21:07:14.740622  521964 command_runner.go:130] > # Specify enabled metrics collectors.
	I1201 21:07:14.740637  521964 command_runner.go:130] > # Per default all metrics are enabled.
	I1201 21:07:14.740644  521964 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1201 21:07:14.740655  521964 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1201 21:07:14.740662  521964 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1201 21:07:14.740666  521964 command_runner.go:130] > # metrics_collectors = [
	I1201 21:07:14.740674  521964 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1201 21:07:14.740680  521964 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1201 21:07:14.740688  521964 command_runner.go:130] > # 	"containers_oom_total",
	I1201 21:07:14.740692  521964 command_runner.go:130] > # 	"processes_defunct",
	I1201 21:07:14.740706  521964 command_runner.go:130] > # 	"operations_total",
	I1201 21:07:14.740714  521964 command_runner.go:130] > # 	"operations_latency_seconds",
	I1201 21:07:14.740719  521964 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1201 21:07:14.740727  521964 command_runner.go:130] > # 	"operations_errors_total",
	I1201 21:07:14.740731  521964 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1201 21:07:14.740736  521964 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1201 21:07:14.740740  521964 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1201 21:07:14.740748  521964 command_runner.go:130] > # 	"image_pulls_success_total",
	I1201 21:07:14.740753  521964 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1201 21:07:14.740761  521964 command_runner.go:130] > # 	"containers_oom_count_total",
	I1201 21:07:14.740766  521964 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1201 21:07:14.740780  521964 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1201 21:07:14.740789  521964 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1201 21:07:14.740792  521964 command_runner.go:130] > # ]
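	[editor's note] The prefix equivalence described above ("operations" == "crio_operations" == "container_runtime_crio_operations") amounts to stripping optional prefixes before comparing collector names. A minimal Python sketch of that normalization (`normalize_collector` is a hypothetical helper, not CRI-O code):

	```python
	def normalize_collector(name: str) -> str:
	    """Strip optional prefixes so all spellings name the same collector."""
	    for prefix in ("container_runtime_crio_", "crio_"):
	        if name.startswith(prefix):
	            return name[len(prefix):]
	    return name
	```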
	I1201 21:07:14.740803  521964 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1201 21:07:14.740807  521964 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1201 21:07:14.740812  521964 command_runner.go:130] > # The port on which the metrics server will listen.
	I1201 21:07:14.740816  521964 command_runner.go:130] > # metrics_port = 9090
	I1201 21:07:14.740825  521964 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1201 21:07:14.740829  521964 command_runner.go:130] > # metrics_socket = ""
	I1201 21:07:14.740839  521964 command_runner.go:130] > # The certificate for the secure metrics server.
	I1201 21:07:14.740846  521964 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1201 21:07:14.740867  521964 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1201 21:07:14.740879  521964 command_runner.go:130] > # certificate on any modification event.
	I1201 21:07:14.740883  521964 command_runner.go:130] > # metrics_cert = ""
	I1201 21:07:14.740888  521964 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1201 21:07:14.740897  521964 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1201 21:07:14.740901  521964 command_runner.go:130] > # metrics_key = ""
	I1201 21:07:14.740912  521964 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1201 21:07:14.740916  521964 command_runner.go:130] > [crio.tracing]
	I1201 21:07:14.740933  521964 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1201 21:07:14.740941  521964 command_runner.go:130] > # enable_tracing = false
	I1201 21:07:14.740946  521964 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1201 21:07:14.740959  521964 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1201 21:07:14.740966  521964 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1201 21:07:14.740970  521964 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1201 21:07:14.740975  521964 command_runner.go:130] > # CRI-O NRI configuration.
	I1201 21:07:14.740982  521964 command_runner.go:130] > [crio.nri]
	I1201 21:07:14.740987  521964 command_runner.go:130] > # Globally enable or disable NRI.
	I1201 21:07:14.740993  521964 command_runner.go:130] > # enable_nri = true
	I1201 21:07:14.741004  521964 command_runner.go:130] > # NRI socket to listen on.
	I1201 21:07:14.741013  521964 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1201 21:07:14.741018  521964 command_runner.go:130] > # NRI plugin directory to use.
	I1201 21:07:14.741026  521964 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1201 21:07:14.741031  521964 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1201 21:07:14.741039  521964 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1201 21:07:14.741046  521964 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1201 21:07:14.741111  521964 command_runner.go:130] > # nri_disable_connections = false
	I1201 21:07:14.741122  521964 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1201 21:07:14.741131  521964 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1201 21:07:14.741137  521964 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1201 21:07:14.741142  521964 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1201 21:07:14.741156  521964 command_runner.go:130] > # NRI default validator configuration.
	I1201 21:07:14.741167  521964 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1201 21:07:14.741178  521964 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1201 21:07:14.741190  521964 command_runner.go:130] > # can be restricted/rejected:
	I1201 21:07:14.741198  521964 command_runner.go:130] > # - OCI hook injection
	I1201 21:07:14.741206  521964 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1201 21:07:14.741214  521964 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1201 21:07:14.741218  521964 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1201 21:07:14.741229  521964 command_runner.go:130] > # - adjustment of linux namespaces
	I1201 21:07:14.741241  521964 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1201 21:07:14.741252  521964 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1201 21:07:14.741262  521964 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1201 21:07:14.741268  521964 command_runner.go:130] > #
	I1201 21:07:14.741276  521964 command_runner.go:130] > # [crio.nri.default_validator]
	I1201 21:07:14.741281  521964 command_runner.go:130] > # nri_enable_default_validator = false
	I1201 21:07:14.741290  521964 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1201 21:07:14.741295  521964 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1201 21:07:14.741308  521964 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1201 21:07:14.741318  521964 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1201 21:07:14.741323  521964 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1201 21:07:14.741331  521964 command_runner.go:130] > # nri_validator_required_plugins = [
	I1201 21:07:14.741334  521964 command_runner.go:130] > # ]
	I1201 21:07:14.741344  521964 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1201 21:07:14.741350  521964 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1201 21:07:14.741357  521964 command_runner.go:130] > [crio.stats]
	I1201 21:07:14.741364  521964 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1201 21:07:14.741379  521964 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1201 21:07:14.741384  521964 command_runner.go:130] > # stats_collection_period = 0
	I1201 21:07:14.741390  521964 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1201 21:07:14.741400  521964 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1201 21:07:14.741409  521964 command_runner.go:130] > # collection_period = 0
	I1201 21:07:14.743695  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.701489723Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1201 21:07:14.743741  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.701919228Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1201 21:07:14.743753  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.702192379Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1201 21:07:14.743761  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.70239116Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1201 21:07:14.743770  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.702743464Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:07:14.743783  521964 command_runner.go:130] ! time="2025-12-01T21:07:14.703251326Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1201 21:07:14.743797  521964 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1201 21:07:14.743882  521964 cni.go:84] Creating CNI manager for ""
	I1201 21:07:14.743892  521964 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:07:14.743907  521964 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 21:07:14.743929  521964 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-198694 NodeName:functional-198694 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 21:07:14.744055  521964 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-198694"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 21:07:14.744124  521964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 21:07:14.751405  521964 command_runner.go:130] > kubeadm
	I1201 21:07:14.751425  521964 command_runner.go:130] > kubectl
	I1201 21:07:14.751429  521964 command_runner.go:130] > kubelet
	I1201 21:07:14.752384  521964 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 21:07:14.752448  521964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 21:07:14.760026  521964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 21:07:14.773137  521964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 21:07:14.786891  521964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1201 21:07:14.799994  521964 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1201 21:07:14.803501  521964 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1201 21:07:14.803615  521964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:07:14.920306  521964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 21:07:15.405274  521964 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694 for IP: 192.168.49.2
	I1201 21:07:15.405300  521964 certs.go:195] generating shared ca certs ...
	I1201 21:07:15.405343  521964 certs.go:227] acquiring lock for ca certs: {Name:mk0475ccdbd6f854bab22fd8dfb32cc1af021336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:15.405542  521964 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key
	I1201 21:07:15.405589  521964 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key
	I1201 21:07:15.405597  521964 certs.go:257] generating profile certs ...
	I1201 21:07:15.405726  521964 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key
	I1201 21:07:15.405806  521964 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key.ab5f5a28
	I1201 21:07:15.405849  521964 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key
	I1201 21:07:15.405858  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1201 21:07:15.405870  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1201 21:07:15.405880  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1201 21:07:15.405895  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1201 21:07:15.405908  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1201 21:07:15.405920  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1201 21:07:15.405931  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1201 21:07:15.405941  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1201 21:07:15.406006  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem (1338 bytes)
	W1201 21:07:15.406049  521964 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002_empty.pem, impossibly tiny 0 bytes
	I1201 21:07:15.406068  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 21:07:15.406113  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem (1082 bytes)
	I1201 21:07:15.406137  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem (1123 bytes)
	I1201 21:07:15.406172  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem (1675 bytes)
	I1201 21:07:15.406237  521964 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:07:15.406287  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem -> /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.406308  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.406325  521964 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.407085  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 21:07:15.435325  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 21:07:15.460453  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 21:07:15.484820  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1201 21:07:15.503541  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 21:07:15.522001  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 21:07:15.540074  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 21:07:15.557935  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 21:07:15.576709  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem --> /usr/share/ca-certificates/486002.pem (1338 bytes)
	I1201 21:07:15.595484  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /usr/share/ca-certificates/4860022.pem (1708 bytes)
	I1201 21:07:15.614431  521964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 21:07:15.632609  521964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 21:07:15.645463  521964 ssh_runner.go:195] Run: openssl version
	I1201 21:07:15.651732  521964 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1201 21:07:15.652120  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/486002.pem && ln -fs /usr/share/ca-certificates/486002.pem /etc/ssl/certs/486002.pem"
	I1201 21:07:15.660522  521964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.664099  521964 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.664137  521964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.664196  521964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/486002.pem
	I1201 21:07:15.704899  521964 command_runner.go:130] > 51391683
	I1201 21:07:15.705348  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/486002.pem /etc/ssl/certs/51391683.0"
	I1201 21:07:15.713374  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4860022.pem && ln -fs /usr/share/ca-certificates/4860022.pem /etc/ssl/certs/4860022.pem"
	I1201 21:07:15.721756  521964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.725563  521964 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.725613  521964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.725662  521964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4860022.pem
	I1201 21:07:15.766341  521964 command_runner.go:130] > 3ec20f2e
	I1201 21:07:15.766756  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4860022.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 21:07:15.774531  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 21:07:15.784868  521964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.788871  521964 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.788929  521964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.788991  521964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:07:15.829962  521964 command_runner.go:130] > b5213941
	I1201 21:07:15.830101  521964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 21:07:15.838399  521964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 21:07:15.842255  521964 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 21:07:15.842282  521964 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1201 21:07:15.842289  521964 command_runner.go:130] > Device: 259,1	Inode: 2345358     Links: 1
	I1201 21:07:15.842296  521964 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1201 21:07:15.842308  521964 command_runner.go:130] > Access: 2025-12-01 21:03:07.261790641 +0000
	I1201 21:07:15.842313  521964 command_runner.go:130] > Modify: 2025-12-01 20:59:03.599058650 +0000
	I1201 21:07:15.842318  521964 command_runner.go:130] > Change: 2025-12-01 20:59:03.599058650 +0000
	I1201 21:07:15.842324  521964 command_runner.go:130] >  Birth: 2025-12-01 20:59:03.599058650 +0000
	I1201 21:07:15.842405  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 21:07:15.883885  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:15.884377  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 21:07:15.925029  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:15.925488  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 21:07:15.967363  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:15.967505  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 21:07:16.008933  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:16.009470  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 21:07:16.052395  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:16.052881  521964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 21:07:16.094441  521964 command_runner.go:130] > Certificate will not expire
	I1201 21:07:16.094868  521964 kubeadm.go:401] StartCluster: {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:07:16.094970  521964 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 21:07:16.095033  521964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 21:07:16.122671  521964 cri.go:89] found id: ""
	I1201 21:07:16.122745  521964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 21:07:16.129629  521964 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1201 21:07:16.129704  521964 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1201 21:07:16.129749  521964 command_runner.go:130] > /var/lib/minikube/etcd:
	I1201 21:07:16.130618  521964 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 21:07:16.130634  521964 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 21:07:16.130700  521964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 21:07:16.138263  521964 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:07:16.138690  521964 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-198694" does not appear in /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.138796  521964 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-482752/kubeconfig needs updating (will repair): [kubeconfig missing "functional-198694" cluster setting kubeconfig missing "functional-198694" context setting]
	I1201 21:07:16.139097  521964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/kubeconfig: {Name:mk92cfd0553ba70a7f11610c1bc1b8b04b905ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:16.139560  521964 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.139697  521964 kapi.go:59] client config for functional-198694: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key", CAFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 21:07:16.140229  521964 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1201 21:07:16.140256  521964 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1201 21:07:16.140265  521964 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1201 21:07:16.140270  521964 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1201 21:07:16.140285  521964 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1201 21:07:16.140581  521964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 21:07:16.140673  521964 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1201 21:07:16.148484  521964 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1201 21:07:16.148518  521964 kubeadm.go:602] duration metric: took 17.877938ms to restartPrimaryControlPlane
	I1201 21:07:16.148528  521964 kubeadm.go:403] duration metric: took 53.667619ms to StartCluster
	I1201 21:07:16.148545  521964 settings.go:142] acquiring lock: {Name:mk783c1fd28fb527bb837882511f132133dc86fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:16.148604  521964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.149244  521964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/kubeconfig: {Name:mk92cfd0553ba70a7f11610c1bc1b8b04b905ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:07:16.149450  521964 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 21:07:16.149837  521964 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:07:16.149887  521964 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 21:07:16.149959  521964 addons.go:70] Setting storage-provisioner=true in profile "functional-198694"
	I1201 21:07:16.149971  521964 addons.go:239] Setting addon storage-provisioner=true in "functional-198694"
	I1201 21:07:16.149997  521964 host.go:66] Checking if "functional-198694" exists ...
	I1201 21:07:16.150469  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:16.150813  521964 addons.go:70] Setting default-storageclass=true in profile "functional-198694"
	I1201 21:07:16.150847  521964 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-198694"
	I1201 21:07:16.151095  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:16.157800  521964 out.go:179] * Verifying Kubernetes components...
	I1201 21:07:16.160495  521964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:07:16.191854  521964 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 21:07:16.194709  521964 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:16.194728  521964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 21:07:16.194804  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:16.200857  521964 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:07:16.201020  521964 kapi.go:59] client config for functional-198694: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key", CAFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 21:07:16.201620  521964 addons.go:239] Setting addon default-storageclass=true in "functional-198694"
	I1201 21:07:16.201664  521964 host.go:66] Checking if "functional-198694" exists ...
	I1201 21:07:16.202447  521964 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:07:16.245603  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:16.261120  521964 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:16.261144  521964 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 21:07:16.261216  521964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:07:16.294119  521964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:07:16.373164  521964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 21:07:16.408855  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:16.445769  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:17.156317  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.156488  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156559  521964 retry.go:31] will retry after 323.483538ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156628  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.156659  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156673  521964 retry.go:31] will retry after 132.387182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.156540  521964 node_ready.go:35] waiting up to 6m0s for node "functional-198694" to be "Ready" ...
	I1201 21:07:17.156859  521964 type.go:168] "Request Body" body=""
	I1201 21:07:17.156951  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:17.157318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:17.289607  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:17.345927  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.349389  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.349423  521964 retry.go:31] will retry after 369.598465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.480797  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:17.537300  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.541071  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.541105  521964 retry.go:31] will retry after 250.665906ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.657414  521964 type.go:168] "Request Body" body=""
	I1201 21:07:17.657490  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:17.657803  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:17.720223  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:17.783305  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.783341  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.783362  521964 retry.go:31] will retry after 375.003536ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.792548  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:17.854946  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:17.854989  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:17.855009  521964 retry.go:31] will retry after 643.882626ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.157596  521964 type.go:168] "Request Body" body=""
	I1201 21:07:18.157670  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:18.158003  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:18.159267  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:18.225579  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:18.225683  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.225726  521964 retry.go:31] will retry after 1.172405999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.500161  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:18.566908  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:18.566958  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.566979  521964 retry.go:31] will retry after 1.221518169s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:18.657190  521964 type.go:168] "Request Body" body=""
	I1201 21:07:18.657271  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:18.657601  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:19.157332  521964 type.go:168] "Request Body" body=""
	I1201 21:07:19.157408  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:19.157736  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:19.157807  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:19.398291  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:19.478299  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:19.478401  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:19.478424  521964 retry.go:31] will retry after 725.636222ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:19.657679  521964 type.go:168] "Request Body" body=""
	I1201 21:07:19.657755  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:19.658075  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:19.789414  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:19.847191  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:19.847229  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:19.847250  521964 retry.go:31] will retry after 688.680113ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.157514  521964 type.go:168] "Request Body" body=""
	I1201 21:07:20.157586  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:20.157835  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:20.205210  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:20.265409  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:20.265448  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.265467  521964 retry.go:31] will retry after 1.46538703s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.536913  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:20.597058  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:20.597109  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.597130  521964 retry.go:31] will retry after 1.65793185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:20.657434  521964 type.go:168] "Request Body" body=""
	I1201 21:07:20.657509  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:20.657856  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:21.157726  521964 type.go:168] "Request Body" body=""
	I1201 21:07:21.157805  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:21.158133  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:21.158204  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:21.656922  521964 type.go:168] "Request Body" body=""
	I1201 21:07:21.657048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:21.657367  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:21.731621  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:21.794486  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:21.794526  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:21.794546  521964 retry.go:31] will retry after 2.907930062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:22.156980  521964 type.go:168] "Request Body" body=""
	I1201 21:07:22.157055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:22.157385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:22.255851  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:22.319449  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:22.319491  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:22.319511  521964 retry.go:31] will retry after 2.874628227s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:22.656962  521964 type.go:168] "Request Body" body=""
	I1201 21:07:22.657055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:22.657381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:23.156910  521964 type.go:168] "Request Body" body=""
	I1201 21:07:23.156984  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:23.157294  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:23.656973  521964 type.go:168] "Request Body" body=""
	I1201 21:07:23.657065  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:23.657420  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:23.657472  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:24.157139  521964 type.go:168] "Request Body" body=""
	I1201 21:07:24.157221  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:24.157543  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:24.657245  521964 type.go:168] "Request Body" body=""
	I1201 21:07:24.657316  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:24.657622  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:24.702795  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:24.765996  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:24.766044  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:24.766064  521964 retry.go:31] will retry after 4.286350529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:25.157658  521964 type.go:168] "Request Body" body=""
	I1201 21:07:25.157735  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:25.158024  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:25.194368  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:25.250297  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:25.253946  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:25.253992  521964 retry.go:31] will retry after 4.844090269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:25.657562  521964 type.go:168] "Request Body" body=""
	I1201 21:07:25.657643  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:25.657986  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:25.658042  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:26.157893  521964 type.go:168] "Request Body" body=""
	I1201 21:07:26.157964  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:26.158227  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:26.657145  521964 type.go:168] "Request Body" body=""
	I1201 21:07:26.657225  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:26.657521  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:27.156930  521964 type.go:168] "Request Body" body=""
	I1201 21:07:27.157016  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:27.157343  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:27.656909  521964 type.go:168] "Request Body" body=""
	I1201 21:07:27.656990  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:27.657272  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:28.156970  521964 type.go:168] "Request Body" body=""
	I1201 21:07:28.157048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:28.157420  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:28.157476  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:28.657148  521964 type.go:168] "Request Body" body=""
	I1201 21:07:28.657244  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:28.657592  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:29.053156  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:29.109834  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:29.112973  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:29.113004  521964 retry.go:31] will retry after 7.544668628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:29.157167  521964 type.go:168] "Request Body" body=""
	I1201 21:07:29.157241  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:29.157507  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:29.656951  521964 type.go:168] "Request Body" body=""
	I1201 21:07:29.657043  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:29.657380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:30.099244  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:30.157873  521964 type.go:168] "Request Body" body=""
	I1201 21:07:30.157941  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:30.158210  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:30.158254  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:30.164980  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:30.165032  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:30.165052  521964 retry.go:31] will retry after 3.932491359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:30.657621  521964 type.go:168] "Request Body" body=""
	I1201 21:07:30.657701  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:30.657964  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:31.157730  521964 type.go:168] "Request Body" body=""
	I1201 21:07:31.157809  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:31.158140  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:31.656971  521964 type.go:168] "Request Body" body=""
	I1201 21:07:31.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:31.657377  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:32.156945  521964 type.go:168] "Request Body" body=""
	I1201 21:07:32.157020  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:32.157283  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:32.656981  521964 type.go:168] "Request Body" body=""
	I1201 21:07:32.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:32.657395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:32.657449  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:33.156987  521964 type.go:168] "Request Body" body=""
	I1201 21:07:33.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:33.157406  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:33.657102  521964 type.go:168] "Request Body" body=""
	I1201 21:07:33.657175  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:33.657460  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:34.097811  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:34.156372  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:34.156417  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:34.156437  521964 retry.go:31] will retry after 10.974576666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:34.157589  521964 type.go:168] "Request Body" body=""
	I1201 21:07:34.157652  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:34.157912  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:34.657701  521964 type.go:168] "Request Body" body=""
	I1201 21:07:34.657780  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:34.658097  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:34.658164  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:35.157826  521964 type.go:168] "Request Body" body=""
	I1201 21:07:35.157905  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:35.158165  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:35.656910  521964 type.go:168] "Request Body" body=""
	I1201 21:07:35.656988  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:35.657287  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:36.157319  521964 type.go:168] "Request Body" body=""
	I1201 21:07:36.157409  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:36.157794  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:36.657573  521964 type.go:168] "Request Body" body=""
	I1201 21:07:36.657644  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:36.657912  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:36.658034  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:36.730483  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:36.730533  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:36.730554  521964 retry.go:31] will retry after 6.063500375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:37.157009  521964 type.go:168] "Request Body" body=""
	I1201 21:07:37.157097  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:37.157448  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:37.157505  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:37.657206  521964 type.go:168] "Request Body" body=""
	I1201 21:07:37.657296  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:37.657631  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:38.157704  521964 type.go:168] "Request Body" body=""
	I1201 21:07:38.157772  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:38.158095  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:38.657893  521964 type.go:168] "Request Body" body=""
	I1201 21:07:38.657966  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:38.658289  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:39.156875  521964 type.go:168] "Request Body" body=""
	I1201 21:07:39.156971  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:39.157322  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:39.656915  521964 type.go:168] "Request Body" body=""
	I1201 21:07:39.657022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:39.657329  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:39.657378  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:40.156953  521964 type.go:168] "Request Body" body=""
	I1201 21:07:40.157033  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:40.157378  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:40.657085  521964 type.go:168] "Request Body" body=""
	I1201 21:07:40.657161  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:40.657485  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:41.157198  521964 type.go:168] "Request Body" body=""
	I1201 21:07:41.157267  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:41.157524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:41.657708  521964 type.go:168] "Request Body" body=""
	I1201 21:07:41.657786  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:41.658115  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:41.658168  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:42.157124  521964 type.go:168] "Request Body" body=""
	I1201 21:07:42.157211  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:42.157646  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:42.657031  521964 type.go:168] "Request Body" body=""
	I1201 21:07:42.657110  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:42.657398  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:42.794843  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:42.853617  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:42.853659  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:42.853680  521964 retry.go:31] will retry after 14.65335173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:43.156952  521964 type.go:168] "Request Body" body=""
	I1201 21:07:43.157032  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:43.157349  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:43.656979  521964 type.go:168] "Request Body" body=""
	I1201 21:07:43.657076  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:43.657394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:44.156994  521964 type.go:168] "Request Body" body=""
	I1201 21:07:44.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:44.157343  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:44.157384  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:44.656997  521964 type.go:168] "Request Body" body=""
	I1201 21:07:44.657087  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:44.657396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:45.131211  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:45.157806  521964 type.go:168] "Request Body" body=""
	I1201 21:07:45.157891  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:45.158212  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:45.221334  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:45.221384  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:45.221409  521964 retry.go:31] will retry after 11.551495399s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:45.657812  521964 type.go:168] "Request Body" body=""
	I1201 21:07:45.657890  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:45.658158  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:46.157214  521964 type.go:168] "Request Body" body=""
	I1201 21:07:46.157292  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:46.157581  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:46.157642  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:46.657575  521964 type.go:168] "Request Body" body=""
	I1201 21:07:46.657647  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:46.657977  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:47.157285  521964 type.go:168] "Request Body" body=""
	I1201 21:07:47.157350  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:47.157647  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:47.656995  521964 type.go:168] "Request Body" body=""
	I1201 21:07:47.657075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:47.657403  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:48.156986  521964 type.go:168] "Request Body" body=""
	I1201 21:07:48.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:48.157380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:48.657048  521964 type.go:168] "Request Body" body=""
	I1201 21:07:48.657118  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:48.657442  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:48.657502  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:49.157019  521964 type.go:168] "Request Body" body=""
	I1201 21:07:49.157102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:49.157404  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:49.657133  521964 type.go:168] "Request Body" body=""
	I1201 21:07:49.657208  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:49.657513  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:50.156941  521964 type.go:168] "Request Body" body=""
	I1201 21:07:50.157013  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:50.157268  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:50.656997  521964 type.go:168] "Request Body" body=""
	I1201 21:07:50.657077  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:50.657401  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:51.157109  521964 type.go:168] "Request Body" body=""
	I1201 21:07:51.157181  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:51.157563  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:51.157620  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:51.657409  521964 type.go:168] "Request Body" body=""
	I1201 21:07:51.657480  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:51.657812  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:52.157619  521964 type.go:168] "Request Body" body=""
	I1201 21:07:52.157701  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:52.158034  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:52.657819  521964 type.go:168] "Request Body" body=""
	I1201 21:07:52.657897  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:52.658222  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:53.157452  521964 type.go:168] "Request Body" body=""
	I1201 21:07:53.157532  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:53.157789  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:53.157829  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:53.657659  521964 type.go:168] "Request Body" body=""
	I1201 21:07:53.657737  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:53.658067  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:54.157887  521964 type.go:168] "Request Body" body=""
	I1201 21:07:54.157963  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:54.158311  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:54.656870  521964 type.go:168] "Request Body" body=""
	I1201 21:07:54.656941  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:54.657207  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:55.156922  521964 type.go:168] "Request Body" body=""
	I1201 21:07:55.156998  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:55.157347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:55.656937  521964 type.go:168] "Request Body" body=""
	I1201 21:07:55.657065  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:55.657390  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:55.657445  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:56.157203  521964 type.go:168] "Request Body" body=""
	I1201 21:07:56.157283  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:56.157556  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:56.657510  521964 type.go:168] "Request Body" body=""
	I1201 21:07:56.657589  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:56.657925  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:56.773160  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:07:56.828599  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:56.831983  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:56.832017  521964 retry.go:31] will retry after 19.593958555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:57.157556  521964 type.go:168] "Request Body" body=""
	I1201 21:07:57.157632  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:57.157962  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:57.507290  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:07:57.561691  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:07:57.565020  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:57.565054  521964 retry.go:31] will retry after 13.393925675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:07:57.657318  521964 type.go:168] "Request Body" body=""
	I1201 21:07:57.657392  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:57.657711  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:57.657760  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:07:58.157573  521964 type.go:168] "Request Body" body=""
	I1201 21:07:58.157646  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:58.157951  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:58.657731  521964 type.go:168] "Request Body" body=""
	I1201 21:07:58.657806  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:58.658143  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:59.157766  521964 type.go:168] "Request Body" body=""
	I1201 21:07:59.157844  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:59.158113  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:07:59.657909  521964 type.go:168] "Request Body" body=""
	I1201 21:07:59.657992  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:07:59.658327  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:07:59.658388  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:00.157067  521964 type.go:168] "Request Body" body=""
	I1201 21:08:00.157155  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:00.157502  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:00.656909  521964 type.go:168] "Request Body" body=""
	I1201 21:08:00.656981  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:00.657260  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:01.156980  521964 type.go:168] "Request Body" body=""
	I1201 21:08:01.157061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:01.157434  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:01.656989  521964 type.go:168] "Request Body" body=""
	I1201 21:08:01.657067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:01.657427  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:02.157121  521964 type.go:168] "Request Body" body=""
	I1201 21:08:02.157192  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:02.157450  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:02.157491  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:02.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:08:02.657058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:02.657389  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:03.156950  521964 type.go:168] "Request Body" body=""
	I1201 21:08:03.157027  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:03.157381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:03.657653  521964 type.go:168] "Request Body" body=""
	I1201 21:08:03.657724  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:03.658043  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:04.157851  521964 type.go:168] "Request Body" body=""
	I1201 21:08:04.157926  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:04.158295  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:04.158353  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:04.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:08:04.656984  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:04.657308  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:05.159258  521964 type.go:168] "Request Body" body=""
	I1201 21:08:05.159335  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:05.159644  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:05.656978  521964 type.go:168] "Request Body" body=""
	I1201 21:08:05.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:05.657400  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:06.157419  521964 type.go:168] "Request Body" body=""
	I1201 21:08:06.157493  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:06.157828  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:06.657668  521964 type.go:168] "Request Body" body=""
	I1201 21:08:06.657743  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:06.658026  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:06.658074  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:07.157783  521964 type.go:168] "Request Body" body=""
	I1201 21:08:07.157860  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:07.158171  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:07.656931  521964 type.go:168] "Request Body" body=""
	I1201 21:08:07.657012  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:07.657345  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:08.157032  521964 type.go:168] "Request Body" body=""
	I1201 21:08:08.157106  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:08.157464  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:08.657162  521964 type.go:168] "Request Body" body=""
	I1201 21:08:08.657254  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:08.657573  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:09.157296  521964 type.go:168] "Request Body" body=""
	I1201 21:08:09.157373  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:09.157697  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:09.157750  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:09.657059  521964 type.go:168] "Request Body" body=""
	I1201 21:08:09.657141  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:09.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:10.156962  521964 type.go:168] "Request Body" body=""
	I1201 21:08:10.157037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:10.157365  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:10.656967  521964 type.go:168] "Request Body" body=""
	I1201 21:08:10.657051  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:10.657364  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:10.960044  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:08:11.016321  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:11.019785  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:11.019824  521964 retry.go:31] will retry after 44.695855679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:11.156928  521964 type.go:168] "Request Body" body=""
	I1201 21:08:11.157027  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:11.157315  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:11.657003  521964 type.go:168] "Request Body" body=""
	I1201 21:08:11.657075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:11.657408  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:11.657463  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:12.157121  521964 type.go:168] "Request Body" body=""
	I1201 21:08:12.157198  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:12.157770  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:12.657058  521964 type.go:168] "Request Body" body=""
	I1201 21:08:12.657134  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:12.657388  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:13.156995  521964 type.go:168] "Request Body" body=""
	I1201 21:08:13.157075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:13.157385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:13.657097  521964 type.go:168] "Request Body" body=""
	I1201 21:08:13.657169  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:13.657467  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:13.657512  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:14.156922  521964 type.go:168] "Request Body" body=""
	I1201 21:08:14.157012  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:14.157318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:14.657025  521964 type.go:168] "Request Body" body=""
	I1201 21:08:14.657098  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:14.657429  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:15.157163  521964 type.go:168] "Request Body" body=""
	I1201 21:08:15.157273  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:15.157607  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:15.657300  521964 type.go:168] "Request Body" body=""
	I1201 21:08:15.657393  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:15.657718  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:15.657762  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:16.157618  521964 type.go:168] "Request Body" body=""
	I1201 21:08:16.157704  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:16.158073  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:16.426568  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:08:16.504541  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:16.504580  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:16.504599  521964 retry.go:31] will retry after 41.569353087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1201 21:08:16.657931  521964 type.go:168] "Request Body" body=""
	I1201 21:08:16.658002  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:16.658310  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:17.156879  521964 type.go:168] "Request Body" body=""
	I1201 21:08:17.156968  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:17.157222  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:17.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:08:17.657052  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:17.657405  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:18.157142  521964 type.go:168] "Request Body" body=""
	I1201 21:08:18.157229  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:18.157610  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:18.157665  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:18.657777  521964 type.go:168] "Request Body" body=""
	I1201 21:08:18.657865  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:18.658174  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:19.156889  521964 type.go:168] "Request Body" body=""
	I1201 21:08:19.156967  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:19.157284  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:19.657012  521964 type.go:168] "Request Body" body=""
	I1201 21:08:19.657096  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:19.657452  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:20.156910  521964 type.go:168] "Request Body" body=""
	I1201 21:08:20.156981  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:20.157264  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:20.656990  521964 type.go:168] "Request Body" body=""
	I1201 21:08:20.657076  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:20.657458  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:20.657526  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:21.157000  521964 type.go:168] "Request Body" body=""
	I1201 21:08:21.157092  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:21.157485  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:21.656883  521964 type.go:168] "Request Body" body=""
	I1201 21:08:21.656968  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:21.657320  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:22.157049  521964 type.go:168] "Request Body" body=""
	I1201 21:08:22.157135  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:22.157505  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:22.657283  521964 type.go:168] "Request Body" body=""
	I1201 21:08:22.657387  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:22.657820  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:22.657893  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:23.157642  521964 type.go:168] "Request Body" body=""
	I1201 21:08:23.157715  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:23.157983  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:23.657627  521964 type.go:168] "Request Body" body=""
	I1201 21:08:23.657716  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:23.658152  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:24.156945  521964 type.go:168] "Request Body" body=""
	I1201 21:08:24.157033  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:24.157478  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:24.657185  521964 type.go:168] "Request Body" body=""
	I1201 21:08:24.657275  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:24.657653  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:25.157527  521964 type.go:168] "Request Body" body=""
	I1201 21:08:25.157631  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:25.158006  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:25.158072  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:25.657861  521964 type.go:168] "Request Body" body=""
	I1201 21:08:25.657947  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:25.658305  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:26.157315  521964 type.go:168] "Request Body" body=""
	I1201 21:08:26.157387  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:26.157664  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:26.657761  521964 type.go:168] "Request Body" body=""
	I1201 21:08:26.657845  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:26.658250  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:27.157001  521964 type.go:168] "Request Body" body=""
	I1201 21:08:27.157089  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:27.157490  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:27.657204  521964 type.go:168] "Request Body" body=""
	I1201 21:08:27.657277  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:27.657573  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:27.657627  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:28.157001  521964 type.go:168] "Request Body" body=""
	I1201 21:08:28.157095  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:28.157476  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:28.657072  521964 type.go:168] "Request Body" body=""
	I1201 21:08:28.657162  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:28.657537  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:29.157417  521964 type.go:168] "Request Body" body=""
	I1201 21:08:29.157501  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:29.157799  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:29.657718  521964 type.go:168] "Request Body" body=""
	I1201 21:08:29.657811  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:29.658220  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:29.658280  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:30.156978  521964 type.go:168] "Request Body" body=""
	I1201 21:08:30.157057  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:30.157401  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:30.656889  521964 type.go:168] "Request Body" body=""
	I1201 21:08:30.656971  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:30.657275  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:31.157026  521964 type.go:168] "Request Body" body=""
	I1201 21:08:31.157118  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:31.157518  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:31.656986  521964 type.go:168] "Request Body" body=""
	I1201 21:08:31.657072  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:31.657407  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:32.157753  521964 type.go:168] "Request Body" body=""
	I1201 21:08:32.157835  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:32.158232  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:32.158291  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:32.657000  521964 type.go:168] "Request Body" body=""
	I1201 21:08:32.657087  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:32.657475  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:33.157220  521964 type.go:168] "Request Body" body=""
	I1201 21:08:33.157305  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:33.157692  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:33.657487  521964 type.go:168] "Request Body" body=""
	I1201 21:08:33.657629  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:33.657931  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:34.157729  521964 type.go:168] "Request Body" body=""
	I1201 21:08:34.157800  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:34.158115  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:34.656912  521964 type.go:168] "Request Body" body=""
	I1201 21:08:34.657003  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:34.657412  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:34.657482  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:35.157152  521964 type.go:168] "Request Body" body=""
	I1201 21:08:35.157241  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:35.157546  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:35.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:08:35.657062  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:35.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:36.157282  521964 type.go:168] "Request Body" body=""
	I1201 21:08:36.157367  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:36.157727  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:36.657599  521964 type.go:168] "Request Body" body=""
	I1201 21:08:36.657686  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:36.657988  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:36.658045  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:37.157802  521964 type.go:168] "Request Body" body=""
	I1201 21:08:37.157896  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:37.158276  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:37.657031  521964 type.go:168] "Request Body" body=""
	I1201 21:08:37.657119  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:37.657486  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:38.157766  521964 type.go:168] "Request Body" body=""
	I1201 21:08:38.157842  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:38.158130  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:38.657916  521964 type.go:168] "Request Body" body=""
	I1201 21:08:38.657997  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:38.658359  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:38.658421  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:39.156995  521964 type.go:168] "Request Body" body=""
	I1201 21:08:39.157093  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:39.157508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:39.657230  521964 type.go:168] "Request Body" body=""
	I1201 21:08:39.657317  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:39.657685  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:40.157525  521964 type.go:168] "Request Body" body=""
	I1201 21:08:40.157614  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:40.157997  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:40.657880  521964 type.go:168] "Request Body" body=""
	I1201 21:08:40.657968  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:40.658348  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:41.156945  521964 type.go:168] "Request Body" body=""
	I1201 21:08:41.157030  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:41.157382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:41.157447  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:41.657680  521964 type.go:168] "Request Body" body=""
	I1201 21:08:41.657767  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:41.658134  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:42.156920  521964 type.go:168] "Request Body" body=""
	I1201 21:08:42.157036  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:42.157525  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:42.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:08:42.657005  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:42.657312  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:43.156990  521964 type.go:168] "Request Body" body=""
	I1201 21:08:43.157079  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:43.157479  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:43.157548  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:43.657235  521964 type.go:168] "Request Body" body=""
	I1201 21:08:43.657325  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:43.657683  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:44.157498  521964 type.go:168] "Request Body" body=""
	I1201 21:08:44.157581  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:44.158002  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:44.657818  521964 type.go:168] "Request Body" body=""
	I1201 21:08:44.657915  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:44.658331  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:45.157080  521964 type.go:168] "Request Body" body=""
	I1201 21:08:45.157172  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:45.157650  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:45.157719  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:45.656935  521964 type.go:168] "Request Body" body=""
	I1201 21:08:45.657016  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:45.657311  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:46.157385  521964 type.go:168] "Request Body" body=""
	I1201 21:08:46.157475  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:46.157855  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:46.657753  521964 type.go:168] "Request Body" body=""
	I1201 21:08:46.657842  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:46.658189  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:47.157536  521964 type.go:168] "Request Body" body=""
	I1201 21:08:47.157614  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:47.157944  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:47.157998  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:47.657747  521964 type.go:168] "Request Body" body=""
	I1201 21:08:47.657826  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:47.658196  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:48.157876  521964 type.go:168] "Request Body" body=""
	I1201 21:08:48.157958  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:48.158348  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:48.656928  521964 type.go:168] "Request Body" body=""
	I1201 21:08:48.657026  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:48.657375  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:49.156985  521964 type.go:168] "Request Body" body=""
	I1201 21:08:49.157067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:49.157458  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:49.657192  521964 type.go:168] "Request Body" body=""
	I1201 21:08:49.657287  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:49.657715  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:49.657793  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:50.157561  521964 type.go:168] "Request Body" body=""
	I1201 21:08:50.157644  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:50.157981  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:50.657775  521964 type.go:168] "Request Body" body=""
	I1201 21:08:50.657859  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:50.658229  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:51.156948  521964 type.go:168] "Request Body" body=""
	I1201 21:08:51.157046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:51.157416  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:51.656916  521964 type.go:168] "Request Body" body=""
	I1201 21:08:51.656999  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:51.657330  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:52.157006  521964 type.go:168] "Request Body" body=""
	I1201 21:08:52.157094  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:52.157485  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:52.157551  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:52.657260  521964 type.go:168] "Request Body" body=""
	I1201 21:08:52.657345  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:52.657796  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:53.157505  521964 type.go:168] "Request Body" body=""
	I1201 21:08:53.157589  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:53.157948  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:53.657814  521964 type.go:168] "Request Body" body=""
	I1201 21:08:53.657901  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:53.658274  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:54.157033  521964 type.go:168] "Request Body" body=""
	I1201 21:08:54.157120  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:54.157494  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:54.657829  521964 type.go:168] "Request Body" body=""
	I1201 21:08:54.657912  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:54.658226  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:54.658280  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:55.156969  521964 type.go:168] "Request Body" body=""
	I1201 21:08:55.157064  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:55.157449  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:55.657040  521964 type.go:168] "Request Body" body=""
	I1201 21:08:55.657127  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:55.657483  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:55.716783  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1201 21:08:55.791498  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:55.795332  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:55.795559  521964 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1201 21:08:56.157158  521964 type.go:168] "Request Body" body=""
	I1201 21:08:56.157251  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:56.157619  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:56.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:08:56.657679  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:56.658038  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:57.157909  521964 type.go:168] "Request Body" body=""
	I1201 21:08:57.157989  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:57.158351  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:57.158413  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:08:57.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:08:57.656992  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:57.657380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:58.074174  521964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 21:08:58.149106  521964 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:58.149168  521964 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1201 21:08:58.149265  521964 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1201 21:08:58.152649  521964 out.go:179] * Enabled addons: 
	I1201 21:08:58.156383  521964 addons.go:530] duration metric: took 1m42.00648536s for enable addons: enabled=[]
	I1201 21:08:58.157274  521964 type.go:168] "Request Body" body=""
	I1201 21:08:58.157352  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:58.157737  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:58.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:08:58.657670  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:58.658025  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:59.157338  521964 type.go:168] "Request Body" body=""
	I1201 21:08:59.157435  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:59.157723  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:08:59.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:08:59.657679  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:08:59.658051  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:08:59.658126  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:00.157924  521964 type.go:168] "Request Body" body=""
	I1201 21:09:00.158055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:00.158429  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:00.656938  521964 type.go:168] "Request Body" body=""
	I1201 21:09:00.657023  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:00.657396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:01.157023  521964 type.go:168] "Request Body" body=""
	I1201 21:09:01.157113  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:01.157519  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:01.657045  521964 type.go:168] "Request Body" body=""
	I1201 21:09:01.657134  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:01.657523  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:02.157228  521964 type.go:168] "Request Body" body=""
	I1201 21:09:02.157309  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:02.157730  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:02.157812  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:02.657697  521964 type.go:168] "Request Body" body=""
	I1201 21:09:02.657797  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:02.658264  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:03.157016  521964 type.go:168] "Request Body" body=""
	I1201 21:09:03.157105  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:03.157506  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:03.656940  521964 type.go:168] "Request Body" body=""
	I1201 21:09:03.657023  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:03.657317  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:04.157109  521964 type.go:168] "Request Body" body=""
	I1201 21:09:04.157198  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:04.157621  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:04.657376  521964 type.go:168] "Request Body" body=""
	I1201 21:09:04.657464  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:04.657841  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:04.657911  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:05.157626  521964 type.go:168] "Request Body" body=""
	I1201 21:09:05.157704  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:05.158028  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:05.657928  521964 type.go:168] "Request Body" body=""
	I1201 21:09:05.658022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:05.658411  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:06.157283  521964 type.go:168] "Request Body" body=""
	I1201 21:09:06.157384  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:06.157756  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:06.657421  521964 type.go:168] "Request Body" body=""
	I1201 21:09:06.657507  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:06.657800  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:07.157695  521964 type.go:168] "Request Body" body=""
	I1201 21:09:07.157786  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:07.158194  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:07.158265  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:07.656962  521964 type.go:168] "Request Body" body=""
	I1201 21:09:07.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:07.657425  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:08.157836  521964 type.go:168] "Request Body" body=""
	I1201 21:09:08.157917  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:08.158191  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:08.657012  521964 type.go:168] "Request Body" body=""
	I1201 21:09:08.657104  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:08.657486  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:09.156993  521964 type.go:168] "Request Body" body=""
	I1201 21:09:09.157089  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:09.157471  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:09.657023  521964 type.go:168] "Request Body" body=""
	I1201 21:09:09.657120  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:09.657538  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:09.657606  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:10.156999  521964 type.go:168] "Request Body" body=""
	I1201 21:09:10.157086  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:10.157484  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:10.657232  521964 type.go:168] "Request Body" body=""
	I1201 21:09:10.657327  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:10.657688  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:11.157533  521964 type.go:168] "Request Body" body=""
	I1201 21:09:11.157620  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:11.157927  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:11.656987  521964 type.go:168] "Request Body" body=""
	I1201 21:09:11.657072  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:11.657395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:12.157102  521964 type.go:168] "Request Body" body=""
	I1201 21:09:12.157196  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:12.157583  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:12.157646  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:12.657123  521964 type.go:168] "Request Body" body=""
	I1201 21:09:12.657203  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:12.657499  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:13.156966  521964 type.go:168] "Request Body" body=""
	I1201 21:09:13.157055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:13.157438  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:13.656965  521964 type.go:168] "Request Body" body=""
	I1201 21:09:13.657049  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:13.657420  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:14.157820  521964 type.go:168] "Request Body" body=""
	I1201 21:09:14.157917  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:14.158213  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:14.158267  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:14.656980  521964 type.go:168] "Request Body" body=""
	I1201 21:09:14.657065  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:14.657454  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:15.157262  521964 type.go:168] "Request Body" body=""
	I1201 21:09:15.157373  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:15.157794  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:15.657581  521964 type.go:168] "Request Body" body=""
	I1201 21:09:15.657709  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:15.658011  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:16.157632  521964 type.go:168] "Request Body" body=""
	I1201 21:09:16.157709  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:16.158115  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:16.657457  521964 type.go:168] "Request Body" body=""
	I1201 21:09:16.657635  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:16.658136  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:16.658210  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:17.156895  521964 type.go:168] "Request Body" body=""
	I1201 21:09:17.157017  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:17.157412  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:17.657169  521964 type.go:168] "Request Body" body=""
	I1201 21:09:17.657255  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:17.657728  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:18.157777  521964 type.go:168] "Request Body" body=""
	I1201 21:09:18.157890  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:18.158292  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:18.656939  521964 type.go:168] "Request Body" body=""
	I1201 21:09:18.657031  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:18.657390  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:19.157017  521964 type.go:168] "Request Body" body=""
	I1201 21:09:19.157103  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:19.157518  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:19.157588  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:19.657290  521964 type.go:168] "Request Body" body=""
	I1201 21:09:19.657384  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:19.657811  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:20.157631  521964 type.go:168] "Request Body" body=""
	I1201 21:09:20.157730  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:20.158033  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:20.657806  521964 type.go:168] "Request Body" body=""
	I1201 21:09:20.657889  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:20.658276  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:21.156985  521964 type.go:168] "Request Body" body=""
	I1201 21:09:21.157070  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:21.157465  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:21.656919  521964 type.go:168] "Request Body" body=""
	I1201 21:09:21.657003  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:21.657335  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:21.657390  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:22.157006  521964 type.go:168] "Request Body" body=""
	I1201 21:09:22.157096  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:22.157477  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:22.657014  521964 type.go:168] "Request Body" body=""
	I1201 21:09:22.657111  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:22.657539  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:23.157097  521964 type.go:168] "Request Body" body=""
	I1201 21:09:23.157195  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:23.157520  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:23.656994  521964 type.go:168] "Request Body" body=""
	I1201 21:09:23.657090  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:23.657519  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:23.657588  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:24.157112  521964 type.go:168] "Request Body" body=""
	I1201 21:09:24.157201  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:24.157599  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:24.657319  521964 type.go:168] "Request Body" body=""
	I1201 21:09:24.657392  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:24.657673  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:25.156980  521964 type.go:168] "Request Body" body=""
	I1201 21:09:25.157061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:25.157490  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:25.657225  521964 type.go:168] "Request Body" body=""
	I1201 21:09:25.657322  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:25.657718  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:25.657784  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:26.157490  521964 type.go:168] "Request Body" body=""
	I1201 21:09:26.157579  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:26.157896  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:26.657062  521964 type.go:168] "Request Body" body=""
	I1201 21:09:26.657152  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:26.657508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:27.157009  521964 type.go:168] "Request Body" body=""
	I1201 21:09:27.157105  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:27.157490  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:27.656936  521964 type.go:168] "Request Body" body=""
	I1201 21:09:27.657022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:27.657384  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:28.157006  521964 type.go:168] "Request Body" body=""
	I1201 21:09:28.157101  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:28.157533  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:28.157613  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:28.657356  521964 type.go:168] "Request Body" body=""
	I1201 21:09:28.657444  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:28.657855  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:29.157632  521964 type.go:168] "Request Body" body=""
	I1201 21:09:29.157718  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:29.158017  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:29.657847  521964 type.go:168] "Request Body" body=""
	I1201 21:09:29.657938  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:29.658379  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:30.157140  521964 type.go:168] "Request Body" body=""
	I1201 21:09:30.157235  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:30.157673  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:30.157765  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:30.657527  521964 type.go:168] "Request Body" body=""
	I1201 21:09:30.657629  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:30.657947  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:31.157843  521964 type.go:168] "Request Body" body=""
	I1201 21:09:31.157942  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:31.158394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:31.657184  521964 type.go:168] "Request Body" body=""
	I1201 21:09:31.657271  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:31.657662  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:32.157380  521964 type.go:168] "Request Body" body=""
	I1201 21:09:32.157463  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:32.157761  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:32.157813  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:32.657593  521964 type.go:168] "Request Body" body=""
	I1201 21:09:32.657683  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:32.658044  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:33.157900  521964 type.go:168] "Request Body" body=""
	I1201 21:09:33.157992  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:33.158384  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:33.656918  521964 type.go:168] "Request Body" body=""
	I1201 21:09:33.656990  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:33.657277  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:34.156983  521964 type.go:168] "Request Body" body=""
	I1201 21:09:34.157073  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:34.157434  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:34.656961  521964 type.go:168] "Request Body" body=""
	I1201 21:09:34.657042  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:34.657395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:34.657466  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:35.157073  521964 type.go:168] "Request Body" body=""
	I1201 21:09:35.157156  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:35.157471  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:35.656996  521964 type.go:168] "Request Body" body=""
	I1201 21:09:35.657088  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:35.657485  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:36.157396  521964 type.go:168] "Request Body" body=""
	I1201 21:09:36.157480  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:36.157836  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:36.657755  521964 type.go:168] "Request Body" body=""
	I1201 21:09:36.657834  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:36.658119  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:36.658171  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:37.156863  521964 type.go:168] "Request Body" body=""
	I1201 21:09:37.156942  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:37.157295  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:37.657055  521964 type.go:168] "Request Body" body=""
	I1201 21:09:37.657144  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:37.657495  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:38.156908  521964 type.go:168] "Request Body" body=""
	I1201 21:09:38.156981  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:38.157238  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:38.656975  521964 type.go:168] "Request Body" body=""
	I1201 21:09:38.657054  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:38.657402  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:39.157119  521964 type.go:168] "Request Body" body=""
	I1201 21:09:39.157202  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:39.157574  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:39.157635  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:39.656880  521964 type.go:168] "Request Body" body=""
	I1201 21:09:39.656951  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:39.657222  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:40.156899  521964 type.go:168] "Request Body" body=""
	I1201 21:09:40.156974  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:40.157303  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:40.656905  521964 type.go:168] "Request Body" body=""
	I1201 21:09:40.656985  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:40.657322  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:41.157534  521964 type.go:168] "Request Body" body=""
	I1201 21:09:41.157609  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:41.157874  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:41.157915  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:41.657857  521964 type.go:168] "Request Body" body=""
	I1201 21:09:41.657938  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:41.658297  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:42.157048  521964 type.go:168] "Request Body" body=""
	I1201 21:09:42.157140  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:42.157537  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:42.657274  521964 type.go:168] "Request Body" body=""
	I1201 21:09:42.657353  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:42.657634  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:43.156967  521964 type.go:168] "Request Body" body=""
	I1201 21:09:43.157039  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:43.157360  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:43.656976  521964 type.go:168] "Request Body" body=""
	I1201 21:09:43.657053  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:43.657381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:43.657439  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:44.157645  521964 type.go:168] "Request Body" body=""
	I1201 21:09:44.157713  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:44.157985  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:44.657826  521964 type.go:168] "Request Body" body=""
	I1201 21:09:44.657923  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:44.658392  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:45.157027  521964 type.go:168] "Request Body" body=""
	I1201 21:09:45.157125  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:45.157611  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:45.656842  521964 type.go:168] "Request Body" body=""
	I1201 21:09:45.656917  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:45.657187  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:46.157288  521964 type.go:168] "Request Body" body=""
	I1201 21:09:46.157362  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:46.157699  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:46.157757  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:46.657562  521964 type.go:168] "Request Body" body=""
	I1201 21:09:46.657642  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:46.658013  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:47.157757  521964 type.go:168] "Request Body" body=""
	I1201 21:09:47.157829  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:47.158112  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:47.657894  521964 type.go:168] "Request Body" body=""
	I1201 21:09:47.657972  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:47.658340  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:48.156992  521964 type.go:168] "Request Body" body=""
	I1201 21:09:48.157083  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:48.157458  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:48.657587  521964 type.go:168] "Request Body" body=""
	I1201 21:09:48.657654  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:48.657937  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:48.657979  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:49.157706  521964 type.go:168] "Request Body" body=""
	I1201 21:09:49.157785  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:49.158140  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:49.657844  521964 type.go:168] "Request Body" body=""
	I1201 21:09:49.657921  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:49.658333  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:50.156929  521964 type.go:168] "Request Body" body=""
	I1201 21:09:50.157000  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:50.157277  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:50.656964  521964 type.go:168] "Request Body" body=""
	I1201 21:09:50.657044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:50.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:51.157095  521964 type.go:168] "Request Body" body=""
	I1201 21:09:51.157176  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:51.157528  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:51.157583  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:51.656908  521964 type.go:168] "Request Body" body=""
	I1201 21:09:51.656978  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:51.657247  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:52.156952  521964 type.go:168] "Request Body" body=""
	I1201 21:09:52.157030  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:52.157355  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:52.656992  521964 type.go:168] "Request Body" body=""
	I1201 21:09:52.657082  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:52.657488  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:53.157030  521964 type.go:168] "Request Body" body=""
	I1201 21:09:53.157109  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:53.157430  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:53.656984  521964 type.go:168] "Request Body" body=""
	I1201 21:09:53.657067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:53.657399  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:53.657456  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:54.156969  521964 type.go:168] "Request Body" body=""
	I1201 21:09:54.157048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:54.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:54.657665  521964 type.go:168] "Request Body" body=""
	I1201 21:09:54.657741  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:54.658010  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:55.157807  521964 type.go:168] "Request Body" body=""
	I1201 21:09:55.157877  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:55.158212  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:55.656938  521964 type.go:168] "Request Body" body=""
	I1201 21:09:55.657015  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:55.657364  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:56.157167  521964 type.go:168] "Request Body" body=""
	I1201 21:09:56.157246  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:56.157570  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:56.157631  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:56.657418  521964 type.go:168] "Request Body" body=""
	I1201 21:09:56.657498  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:56.657830  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:57.157641  521964 type.go:168] "Request Body" body=""
	I1201 21:09:57.157734  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:57.158097  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:57.657841  521964 type.go:168] "Request Body" body=""
	I1201 21:09:57.657910  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:57.658189  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:58.156868  521964 type.go:168] "Request Body" body=""
	I1201 21:09:58.156944  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:58.157264  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:58.656995  521964 type.go:168] "Request Body" body=""
	I1201 21:09:58.657083  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:58.657454  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:09:58.657513  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:09:59.157748  521964 type.go:168] "Request Body" body=""
	I1201 21:09:59.157815  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:59.158119  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:09:59.656860  521964 type.go:168] "Request Body" body=""
	I1201 21:09:59.656934  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:09:59.657255  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:00.182510  521964 type.go:168] "Request Body" body=""
	I1201 21:10:00.182611  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:00.182943  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:00.657771  521964 type.go:168] "Request Body" body=""
	I1201 21:10:00.657850  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:00.658154  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:00.658206  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:01.156889  521964 type.go:168] "Request Body" body=""
	I1201 21:10:01.156992  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:01.157298  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:01.657214  521964 type.go:168] "Request Body" body=""
	I1201 21:10:01.657298  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:01.657646  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:02.157865  521964 type.go:168] "Request Body" body=""
	I1201 21:10:02.157946  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:02.158249  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:02.656955  521964 type.go:168] "Request Body" body=""
	I1201 21:10:02.657029  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:02.657347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:03.156981  521964 type.go:168] "Request Body" body=""
	I1201 21:10:03.157059  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:03.157411  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:03.157464  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:03.656988  521964 type.go:168] "Request Body" body=""
	I1201 21:10:03.657085  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:03.657453  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:04.156958  521964 type.go:168] "Request Body" body=""
	I1201 21:10:04.157031  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:04.157381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:04.657145  521964 type.go:168] "Request Body" body=""
	I1201 21:10:04.657224  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:04.657551  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:05.159263  521964 type.go:168] "Request Body" body=""
	I1201 21:10:05.159342  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:05.159636  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:05.159683  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:05.656989  521964 type.go:168] "Request Body" body=""
	I1201 21:10:05.657074  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:05.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:06.157539  521964 type.go:168] "Request Body" body=""
	I1201 21:10:06.157637  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:06.158058  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:06.657526  521964 type.go:168] "Request Body" body=""
	I1201 21:10:06.657604  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:06.657867  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:07.157646  521964 type.go:168] "Request Body" body=""
	I1201 21:10:07.157727  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:07.158042  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:07.657854  521964 type.go:168] "Request Body" body=""
	I1201 21:10:07.657935  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:07.658292  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:07.658351  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:08.157603  521964 type.go:168] "Request Body" body=""
	I1201 21:10:08.157674  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:08.157973  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:08.657777  521964 type.go:168] "Request Body" body=""
	I1201 21:10:08.657862  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:08.658197  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:09.156897  521964 type.go:168] "Request Body" body=""
	I1201 21:10:09.156973  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:09.157298  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:09.656870  521964 type.go:168] "Request Body" body=""
	I1201 21:10:09.656947  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:09.657210  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:10.156995  521964 type.go:168] "Request Body" body=""
	I1201 21:10:10.157076  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:10.157429  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:10.157492  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:10.657080  521964 type.go:168] "Request Body" body=""
	I1201 21:10:10.657192  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:10.657646  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:11.157154  521964 type.go:168] "Request Body" body=""
	I1201 21:10:11.157228  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:11.157607  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:11.657517  521964 type.go:168] "Request Body" body=""
	I1201 21:10:11.657597  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:11.658000  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:12.157792  521964 type.go:168] "Request Body" body=""
	I1201 21:10:12.157864  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:12.158185  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:12.158240  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:12.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:10:12.656959  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:12.657219  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:13.156967  521964 type.go:168] "Request Body" body=""
	I1201 21:10:13.157051  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:13.157415  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:13.657121  521964 type.go:168] "Request Body" body=""
	I1201 21:10:13.657199  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:13.657550  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:14.157841  521964 type.go:168] "Request Body" body=""
	I1201 21:10:14.157913  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:14.158250  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:14.158314  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:14.656980  521964 type.go:168] "Request Body" body=""
	I1201 21:10:14.657062  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:14.657362  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:15.156981  521964 type.go:168] "Request Body" body=""
	I1201 21:10:15.157065  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:15.157428  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:15.656915  521964 type.go:168] "Request Body" body=""
	I1201 21:10:15.656989  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:15.657251  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:16.157278  521964 type.go:168] "Request Body" body=""
	I1201 21:10:16.157357  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:16.157705  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:16.657618  521964 type.go:168] "Request Body" body=""
	I1201 21:10:16.657700  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:16.658040  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:16.658091  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:17.157765  521964 type.go:168] "Request Body" body=""
	I1201 21:10:17.157836  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:17.158164  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:17.657888  521964 type.go:168] "Request Body" body=""
	I1201 21:10:17.657971  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:17.658355  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:18.156999  521964 type.go:168] "Request Body" body=""
	I1201 21:10:18.157092  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:18.157410  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:18.657112  521964 type.go:168] "Request Body" body=""
	I1201 21:10:18.657195  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:18.657574  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:19.156976  521964 type.go:168] "Request Body" body=""
	I1201 21:10:19.157055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:19.157401  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:19.157452  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:19.657118  521964 type.go:168] "Request Body" body=""
	I1201 21:10:19.657191  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:19.657516  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:20.156931  521964 type.go:168] "Request Body" body=""
	I1201 21:10:20.157015  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:20.157379  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:20.656945  521964 type.go:168] "Request Body" body=""
	I1201 21:10:20.657020  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:20.657391  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:21.157095  521964 type.go:168] "Request Body" body=""
	I1201 21:10:21.157176  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:21.157552  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:21.157608  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:21.657312  521964 type.go:168] "Request Body" body=""
	I1201 21:10:21.657400  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:21.657677  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:22.156963  521964 type.go:168] "Request Body" body=""
	I1201 21:10:22.157039  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:22.157399  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:22.656978  521964 type.go:168] "Request Body" body=""
	I1201 21:10:22.657053  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:22.657368  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:23.156906  521964 type.go:168] "Request Body" body=""
	I1201 21:10:23.156979  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:23.157247  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:23.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:10:23.657058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:23.657411  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:23.657467  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:24.157128  521964 type.go:168] "Request Body" body=""
	I1201 21:10:24.157203  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:24.157551  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:24.657808  521964 type.go:168] "Request Body" body=""
	I1201 21:10:24.657883  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:24.658178  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:25.156896  521964 type.go:168] "Request Body" body=""
	I1201 21:10:25.156988  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:25.157349  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:25.657068  521964 type.go:168] "Request Body" body=""
	I1201 21:10:25.657155  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:25.657524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:25.657581  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:26.157344  521964 type.go:168] "Request Body" body=""
	I1201 21:10:26.157430  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:26.157711  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:26.657676  521964 type.go:168] "Request Body" body=""
	I1201 21:10:26.657747  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:26.658068  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:27.157849  521964 type.go:168] "Request Body" body=""
	I1201 21:10:27.157936  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:27.158262  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:27.656928  521964 type.go:168] "Request Body" body=""
	I1201 21:10:27.656998  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:27.657287  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:28.156897  521964 type.go:168] "Request Body" body=""
	I1201 21:10:28.156978  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:28.157356  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:28.157423  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:28.656939  521964 type.go:168] "Request Body" body=""
	I1201 21:10:28.657026  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:28.657407  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:29.157147  521964 type.go:168] "Request Body" body=""
	I1201 21:10:29.157277  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:29.157661  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:29.657507  521964 type.go:168] "Request Body" body=""
	I1201 21:10:29.657592  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:29.657974  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:30.157860  521964 type.go:168] "Request Body" body=""
	I1201 21:10:30.157951  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:30.158382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:30.158453  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:30.656922  521964 type.go:168] "Request Body" body=""
	I1201 21:10:30.656991  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:30.657266  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:31.156994  521964 type.go:168] "Request Body" body=""
	I1201 21:10:31.157077  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:31.157409  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:31.657398  521964 type.go:168] "Request Body" body=""
	I1201 21:10:31.657481  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:31.657807  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:32.157533  521964 type.go:168] "Request Body" body=""
	I1201 21:10:32.157604  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:32.157880  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:32.657746  521964 type.go:168] "Request Body" body=""
	I1201 21:10:32.657828  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:32.658176  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:32.658229  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:33.156931  521964 type.go:168] "Request Body" body=""
	I1201 21:10:33.157018  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:33.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:33.657643  521964 type.go:168] "Request Body" body=""
	I1201 21:10:33.657710  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:33.658006  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:34.157807  521964 type.go:168] "Request Body" body=""
	I1201 21:10:34.157894  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:34.158278  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:34.656986  521964 type.go:168] "Request Body" body=""
	I1201 21:10:34.657059  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:34.657396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:35.157082  521964 type.go:168] "Request Body" body=""
	I1201 21:10:35.157199  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:35.157466  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:35.157521  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:35.656976  521964 type.go:168] "Request Body" body=""
	I1201 21:10:35.657056  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:35.657353  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:36.157368  521964 type.go:168] "Request Body" body=""
	I1201 21:10:36.157452  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:36.157808  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:36.657277  521964 type.go:168] "Request Body" body=""
	I1201 21:10:36.657352  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:36.657623  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:37.156972  521964 type.go:168] "Request Body" body=""
	I1201 21:10:37.157053  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:37.157410  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:37.656998  521964 type.go:168] "Request Body" body=""
	I1201 21:10:37.657079  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:37.657415  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:37.657471  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:38.156915  521964 type.go:168] "Request Body" body=""
	I1201 21:10:38.156982  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:38.157242  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:38.656962  521964 type.go:168] "Request Body" body=""
	I1201 21:10:38.657036  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:38.657361  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:39.156965  521964 type.go:168] "Request Body" body=""
	I1201 21:10:39.157041  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:39.157378  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:39.657653  521964 type.go:168] "Request Body" body=""
	I1201 21:10:39.657723  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:39.657992  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:39.658033  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:40.157791  521964 type.go:168] "Request Body" body=""
	I1201 21:10:40.157881  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:40.158267  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:40.656973  521964 type.go:168] "Request Body" body=""
	I1201 21:10:40.657052  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:40.657374  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:41.157040  521964 type.go:168] "Request Body" body=""
	I1201 21:10:41.157114  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:41.157371  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:41.657289  521964 type.go:168] "Request Body" body=""
	I1201 21:10:41.657371  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:41.657729  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:42.157592  521964 type.go:168] "Request Body" body=""
	I1201 21:10:42.157681  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:42.158115  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:42.158193  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:42.657466  521964 type.go:168] "Request Body" body=""
	I1201 21:10:42.657542  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:42.657815  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:43.157576  521964 type.go:168] "Request Body" body=""
	I1201 21:10:43.157658  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:43.158000  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:43.657674  521964 type.go:168] "Request Body" body=""
	I1201 21:10:43.657745  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:43.658086  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:44.157304  521964 type.go:168] "Request Body" body=""
	I1201 21:10:44.157391  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:44.157723  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:44.657534  521964 type.go:168] "Request Body" body=""
	I1201 21:10:44.657625  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:44.657958  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:44.658013  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:45.157841  521964 type.go:168] "Request Body" body=""
	I1201 21:10:45.157928  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:45.158336  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:45.657663  521964 type.go:168] "Request Body" body=""
	I1201 21:10:45.657751  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:45.658031  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:46.157548  521964 type.go:168] "Request Body" body=""
	I1201 21:10:46.157629  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:46.157950  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:46.657877  521964 type.go:168] "Request Body" body=""
	I1201 21:10:46.657952  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:46.658291  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:46.658347  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:47.156857  521964 type.go:168] "Request Body" body=""
	I1201 21:10:47.156933  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:47.157198  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:47.656938  521964 type.go:168] "Request Body" body=""
	I1201 21:10:47.657018  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:47.657397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:48.157015  521964 type.go:168] "Request Body" body=""
	I1201 21:10:48.157088  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:48.157423  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:48.657541  521964 type.go:168] "Request Body" body=""
	I1201 21:10:48.657618  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:48.657936  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:49.157607  521964 type.go:168] "Request Body" body=""
	I1201 21:10:49.157694  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:49.158025  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:49.158076  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:49.657812  521964 type.go:168] "Request Body" body=""
	I1201 21:10:49.657885  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:49.658194  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:50.157521  521964 type.go:168] "Request Body" body=""
	I1201 21:10:50.157593  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:50.157864  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:50.657707  521964 type.go:168] "Request Body" body=""
	I1201 21:10:50.657786  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:50.658124  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:51.157805  521964 type.go:168] "Request Body" body=""
	I1201 21:10:51.157886  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:51.158224  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:51.158279  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:51.657127  521964 type.go:168] "Request Body" body=""
	I1201 21:10:51.657207  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:51.657471  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:52.156922  521964 type.go:168] "Request Body" body=""
	I1201 21:10:52.157004  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:52.157305  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:52.656968  521964 type.go:168] "Request Body" body=""
	I1201 21:10:52.657044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:52.657379  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:53.156947  521964 type.go:168] "Request Body" body=""
	I1201 21:10:53.157022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:53.157288  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:53.656961  521964 type.go:168] "Request Body" body=""
	I1201 21:10:53.657037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:53.657360  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:53.657416  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:54.157114  521964 type.go:168] "Request Body" body=""
	I1201 21:10:54.157189  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:54.157509  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:54.656919  521964 type.go:168] "Request Body" body=""
	I1201 21:10:54.656990  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:54.657260  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:55.157007  521964 type.go:168] "Request Body" body=""
	I1201 21:10:55.157093  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:55.157520  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:55.657242  521964 type.go:168] "Request Body" body=""
	I1201 21:10:55.657323  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:55.657660  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:55.657717  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:56.157590  521964 type.go:168] "Request Body" body=""
	I1201 21:10:56.157668  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:56.157942  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:56.657918  521964 type.go:168] "Request Body" body=""
	I1201 21:10:56.657994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:56.658356  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:57.156965  521964 type.go:168] "Request Body" body=""
	I1201 21:10:57.157050  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:57.157377  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:57.657638  521964 type.go:168] "Request Body" body=""
	I1201 21:10:57.657712  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:57.657982  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:10:57.658023  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:10:58.157749  521964 type.go:168] "Request Body" body=""
	I1201 21:10:58.157829  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:58.158147  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:58.656879  521964 type.go:168] "Request Body" body=""
	I1201 21:10:58.656954  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:58.657292  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:59.156909  521964 type.go:168] "Request Body" body=""
	I1201 21:10:59.156979  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:59.157246  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:10:59.656979  521964 type.go:168] "Request Body" body=""
	I1201 21:10:59.657066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:10:59.657429  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:00.157201  521964 type.go:168] "Request Body" body=""
	I1201 21:11:00.157287  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:00.157616  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:00.157684  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:00.657844  521964 type.go:168] "Request Body" body=""
	I1201 21:11:00.657912  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:00.658231  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:01.156967  521964 type.go:168] "Request Body" body=""
	I1201 21:11:01.157048  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:01.157426  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:01.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:11:01.657102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:01.657407  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:02.156872  521964 type.go:168] "Request Body" body=""
	I1201 21:11:02.156950  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:02.157232  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:02.656970  521964 type.go:168] "Request Body" body=""
	I1201 21:11:02.657044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:02.657347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:02.657392  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:03.157114  521964 type.go:168] "Request Body" body=""
	I1201 21:11:03.157193  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:03.157508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:03.656873  521964 type.go:168] "Request Body" body=""
	I1201 21:11:03.656949  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:03.657257  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:04.156981  521964 type.go:168] "Request Body" body=""
	I1201 21:11:04.157079  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:04.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:04.657086  521964 type.go:168] "Request Body" body=""
	I1201 21:11:04.657170  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:04.657515  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:04.657568  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:05.157777  521964 type.go:168] "Request Body" body=""
	I1201 21:11:05.157855  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:05.158116  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:05.657893  521964 type.go:168] "Request Body" body=""
	I1201 21:11:05.657976  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:05.658256  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:06.157228  521964 type.go:168] "Request Body" body=""
	I1201 21:11:06.157325  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:06.157672  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:06.657576  521964 type.go:168] "Request Body" body=""
	I1201 21:11:06.657644  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:06.657918  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:06.657957  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:07.157699  521964 type.go:168] "Request Body" body=""
	I1201 21:11:07.157770  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:07.158064  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:07.657781  521964 type.go:168] "Request Body" body=""
	I1201 21:11:07.657859  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:07.658224  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:08.157367  521964 type.go:168] "Request Body" body=""
	I1201 21:11:08.157437  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:08.157715  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:08.657499  521964 type.go:168] "Request Body" body=""
	I1201 21:11:08.657592  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:08.657968  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:08.658028  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:09.157829  521964 type.go:168] "Request Body" body=""
	I1201 21:11:09.157911  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:09.158288  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:09.656917  521964 type.go:168] "Request Body" body=""
	I1201 21:11:09.656990  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:09.657288  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:10.156991  521964 type.go:168] "Request Body" body=""
	I1201 21:11:10.157073  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:10.157446  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:10.657170  521964 type.go:168] "Request Body" body=""
	I1201 21:11:10.657248  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:10.657599  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:11.156833  521964 type.go:168] "Request Body" body=""
	I1201 21:11:11.156912  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:11.157200  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:11.157249  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:11.656972  521964 type.go:168] "Request Body" body=""
	I1201 21:11:11.657054  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:11.657556  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:12.157243  521964 type.go:168] "Request Body" body=""
	I1201 21:11:12.157318  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:12.157669  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:12.657823  521964 type.go:168] "Request Body" body=""
	I1201 21:11:12.657911  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:12.658208  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:13.156933  521964 type.go:168] "Request Body" body=""
	I1201 21:11:13.157010  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:13.157369  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:13.157434  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:13.657105  521964 type.go:168] "Request Body" body=""
	I1201 21:11:13.657190  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:13.657535  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:14.157809  521964 type.go:168] "Request Body" body=""
	I1201 21:11:14.157875  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:14.158149  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:14.657913  521964 type.go:168] "Request Body" body=""
	I1201 21:11:14.658000  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:14.658340  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:15.156989  521964 type.go:168] "Request Body" body=""
	I1201 21:11:15.157075  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:15.157421  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:15.157479  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:15.656928  521964 type.go:168] "Request Body" body=""
	I1201 21:11:15.657004  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:15.657310  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:16.157234  521964 type.go:168] "Request Body" body=""
	I1201 21:11:16.157328  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:16.157693  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:16.657344  521964 type.go:168] "Request Body" body=""
	I1201 21:11:16.657439  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:16.657980  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:17.157136  521964 type.go:168] "Request Body" body=""
	I1201 21:11:17.157223  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:17.157592  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:17.157646  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:17.657532  521964 type.go:168] "Request Body" body=""
	I1201 21:11:17.657620  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:17.657985  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:18.157793  521964 type.go:168] "Request Body" body=""
	I1201 21:11:18.157869  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:18.158224  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:18.657332  521964 type.go:168] "Request Body" body=""
	I1201 21:11:18.657414  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:18.657739  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:19.157633  521964 type.go:168] "Request Body" body=""
	I1201 21:11:19.157712  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:19.158075  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:19.158138  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:19.656860  521964 type.go:168] "Request Body" body=""
	I1201 21:11:19.656944  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:19.657367  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:20.157129  521964 type.go:168] "Request Body" body=""
	I1201 21:11:20.157251  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:20.157538  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:20.656988  521964 type.go:168] "Request Body" body=""
	I1201 21:11:20.657069  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:20.657403  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:21.157154  521964 type.go:168] "Request Body" body=""
	I1201 21:11:21.157249  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:21.157653  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:21.657489  521964 type.go:168] "Request Body" body=""
	I1201 21:11:21.657579  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:21.657887  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:21.657951  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:22.157730  521964 type.go:168] "Request Body" body=""
	I1201 21:11:22.157807  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:22.158188  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:22.656943  521964 type.go:168] "Request Body" body=""
	I1201 21:11:22.657022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:22.657362  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:23.157066  521964 type.go:168] "Request Body" body=""
	I1201 21:11:23.157143  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:23.157413  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:23.656990  521964 type.go:168] "Request Body" body=""
	I1201 21:11:23.657067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:23.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:24.157147  521964 type.go:168] "Request Body" body=""
	I1201 21:11:24.157227  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:24.157551  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:24.157604  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:24.657818  521964 type.go:168] "Request Body" body=""
	I1201 21:11:24.657890  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:24.658165  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:25.156979  521964 type.go:168] "Request Body" body=""
	I1201 21:11:25.157066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:25.157466  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:25.657192  521964 type.go:168] "Request Body" body=""
	I1201 21:11:25.657269  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:25.657598  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:26.157266  521964 type.go:168] "Request Body" body=""
	I1201 21:11:26.157339  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:26.157618  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:26.157661  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:26.657561  521964 type.go:168] "Request Body" body=""
	I1201 21:11:26.657639  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:26.658002  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:27.157818  521964 type.go:168] "Request Body" body=""
	I1201 21:11:27.157901  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:27.158277  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:27.657008  521964 type.go:168] "Request Body" body=""
	I1201 21:11:27.657074  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:27.657338  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:28.157024  521964 type.go:168] "Request Body" body=""
	I1201 21:11:28.157108  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:28.157462  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:28.657032  521964 type.go:168] "Request Body" body=""
	I1201 21:11:28.657112  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:28.657442  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:28.657505  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:29.157808  521964 type.go:168] "Request Body" body=""
	I1201 21:11:29.157877  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:29.158164  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:29.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:11:29.656994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:29.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:30.157157  521964 type.go:168] "Request Body" body=""
	I1201 21:11:30.157249  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:30.157650  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:30.657372  521964 type.go:168] "Request Body" body=""
	I1201 21:11:30.657451  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:30.657748  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:30.657794  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:31.157596  521964 type.go:168] "Request Body" body=""
	I1201 21:11:31.157692  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:31.158099  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:31.657089  521964 type.go:168] "Request Body" body=""
	I1201 21:11:31.657174  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:31.657530  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:32.156920  521964 type.go:168] "Request Body" body=""
	I1201 21:11:32.157002  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:32.157283  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:32.656965  521964 type.go:168] "Request Body" body=""
	I1201 21:11:32.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:32.657400  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:33.157120  521964 type.go:168] "Request Body" body=""
	I1201 21:11:33.157204  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:33.157580  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:33.157650  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:33.656925  521964 type.go:168] "Request Body" body=""
	I1201 21:11:33.657005  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:33.657282  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:34.156999  521964 type.go:168] "Request Body" body=""
	I1201 21:11:34.157085  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:34.157508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:34.657236  521964 type.go:168] "Request Body" body=""
	I1201 21:11:34.657329  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:34.657650  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:35.156913  521964 type.go:168] "Request Body" body=""
	I1201 21:11:35.156987  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:35.157331  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:35.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:11:35.657055  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:35.657385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:35.657436  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:36.157278  521964 type.go:168] "Request Body" body=""
	I1201 21:11:36.157365  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:36.157713  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:36.657755  521964 type.go:168] "Request Body" body=""
	I1201 21:11:36.657874  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:36.658213  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:37.156873  521964 type.go:168] "Request Body" body=""
	I1201 21:11:37.156946  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:37.157318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:37.656921  521964 type.go:168] "Request Body" body=""
	I1201 21:11:37.656998  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:37.657365  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:38.157094  521964 type.go:168] "Request Body" body=""
	I1201 21:11:38.157172  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:38.157449  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:38.157537  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:38.656976  521964 type.go:168] "Request Body" body=""
	I1201 21:11:38.657054  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:38.657414  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:39.157117  521964 type.go:168] "Request Body" body=""
	I1201 21:11:39.157193  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:39.157513  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:39.656888  521964 type.go:168] "Request Body" body=""
	I1201 21:11:39.656994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:39.657266  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:40.156958  521964 type.go:168] "Request Body" body=""
	I1201 21:11:40.157031  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:40.157358  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:40.657069  521964 type.go:168] "Request Body" body=""
	I1201 21:11:40.657148  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:40.657480  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:40.657538  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:41.156915  521964 type.go:168] "Request Body" body=""
	I1201 21:11:41.156983  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:41.157301  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:41.657216  521964 type.go:168] "Request Body" body=""
	I1201 21:11:41.657295  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:41.657644  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:42.157003  521964 type.go:168] "Request Body" body=""
	I1201 21:11:42.157088  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:42.157475  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:42.657872  521964 type.go:168] "Request Body" body=""
	I1201 21:11:42.657940  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:42.658284  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:42.658338  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:43.156961  521964 type.go:168] "Request Body" body=""
	I1201 21:11:43.157034  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:43.157374  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:43.657103  521964 type.go:168] "Request Body" body=""
	I1201 21:11:43.657182  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:43.657524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:44.156866  521964 type.go:168] "Request Body" body=""
	I1201 21:11:44.156937  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:44.157219  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:44.656982  521964 type.go:168] "Request Body" body=""
	I1201 21:11:44.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:44.657376  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:45.157037  521964 type.go:168] "Request Body" body=""
	I1201 21:11:45.157121  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:45.157482  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:45.157545  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:45.657188  521964 type.go:168] "Request Body" body=""
	I1201 21:11:45.657259  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:45.657524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:46.157054  521964 type.go:168] "Request Body" body=""
	I1201 21:11:46.157131  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:46.157511  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:46.656990  521964 type.go:168] "Request Body" body=""
	I1201 21:11:46.657078  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:46.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:47.157109  521964 type.go:168] "Request Body" body=""
	I1201 21:11:47.157180  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:47.157448  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:47.656964  521964 type.go:168] "Request Body" body=""
	I1201 21:11:47.657093  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:47.657408  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:47.657462  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:48.156988  521964 type.go:168] "Request Body" body=""
	I1201 21:11:48.157067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:48.157406  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:48.657126  521964 type.go:168] "Request Body" body=""
	I1201 21:11:48.657197  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:48.657487  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:49.156981  521964 type.go:168] "Request Body" body=""
	I1201 21:11:49.157066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:49.157397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:49.656960  521964 type.go:168] "Request Body" body=""
	I1201 21:11:49.657037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:49.657346  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:50.156927  521964 type.go:168] "Request Body" body=""
	I1201 21:11:50.157010  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:50.157276  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:50.157327  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:50.657022  521964 type.go:168] "Request Body" body=""
	I1201 21:11:50.657102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:50.657478  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:51.156975  521964 type.go:168] "Request Body" body=""
	I1201 21:11:51.157058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:51.157409  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:51.656919  521964 type.go:168] "Request Body" body=""
	I1201 21:11:51.656998  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:51.657362  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:52.156975  521964 type.go:168] "Request Body" body=""
	I1201 21:11:52.157051  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:52.157406  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:52.157465  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:52.657158  521964 type.go:168] "Request Body" body=""
	I1201 21:11:52.657238  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:52.657574  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:53.156907  521964 type.go:168] "Request Body" body=""
	I1201 21:11:53.156984  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:53.157259  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:53.656971  521964 type.go:168] "Request Body" body=""
	I1201 21:11:53.657042  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:53.657409  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:54.156987  521964 type.go:168] "Request Body" body=""
	I1201 21:11:54.157066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:54.157400  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:54.656919  521964 type.go:168] "Request Body" body=""
	I1201 21:11:54.656994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:54.657292  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:54.657346  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:55.156967  521964 type.go:168] "Request Body" body=""
	I1201 21:11:55.157050  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:55.157385  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:55.656978  521964 type.go:168] "Request Body" body=""
	I1201 21:11:55.657053  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:55.657357  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:56.157331  521964 type.go:168] "Request Body" body=""
	I1201 21:11:56.157412  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:56.157693  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:56.657721  521964 type.go:168] "Request Body" body=""
	I1201 21:11:56.657797  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:56.658158  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:56.658204  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:57.156915  521964 type.go:168] "Request Body" body=""
	I1201 21:11:57.157002  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:57.157388  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:57.657664  521964 type.go:168] "Request Body" body=""
	I1201 21:11:57.657735  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:57.658000  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:58.157786  521964 type.go:168] "Request Body" body=""
	I1201 21:11:58.157861  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:58.158295  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:58.657007  521964 type.go:168] "Request Body" body=""
	I1201 21:11:58.657100  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:58.657480  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:11:59.157749  521964 type.go:168] "Request Body" body=""
	I1201 21:11:59.157823  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:59.158141  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:11:59.158186  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:11:59.656847  521964 type.go:168] "Request Body" body=""
	I1201 21:11:59.656927  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:11:59.657290  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:00.156952  521964 type.go:168] "Request Body" body=""
	I1201 21:12:00.157062  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:00.157388  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:00.657065  521964 type.go:168] "Request Body" body=""
	I1201 21:12:00.657141  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:00.657419  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:01.156993  521964 type.go:168] "Request Body" body=""
	I1201 21:12:01.157080  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:01.157418  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:01.657372  521964 type.go:168] "Request Body" body=""
	I1201 21:12:01.657452  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:01.657807  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:01.657861  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:02.156990  521964 type.go:168] "Request Body" body=""
	I1201 21:12:02.157067  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:02.157446  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:02.656975  521964 type.go:168] "Request Body" body=""
	I1201 21:12:02.657050  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:02.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:03.157097  521964 type.go:168] "Request Body" body=""
	I1201 21:12:03.157177  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:03.157545  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:03.657864  521964 type.go:168] "Request Body" body=""
	I1201 21:12:03.657940  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:03.658290  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:03.658354  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:04.157043  521964 type.go:168] "Request Body" body=""
	I1201 21:12:04.157122  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:04.157481  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:04.657071  521964 type.go:168] "Request Body" body=""
	I1201 21:12:04.657150  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:04.657508  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:05.157762  521964 type.go:168] "Request Body" body=""
	I1201 21:12:05.157829  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:05.158111  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:05.657870  521964 type.go:168] "Request Body" body=""
	I1201 21:12:05.658003  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:05.658357  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:05.658411  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:06.157176  521964 type.go:168] "Request Body" body=""
	I1201 21:12:06.157261  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:06.157642  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:06.657501  521964 type.go:168] "Request Body" body=""
	I1201 21:12:06.657577  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:06.657845  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:07.157682  521964 type.go:168] "Request Body" body=""
	I1201 21:12:07.157766  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:07.158185  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:07.656894  521964 type.go:168] "Request Body" body=""
	I1201 21:12:07.656972  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:07.657318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:08.157028  521964 type.go:168] "Request Body" body=""
	I1201 21:12:08.157109  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:08.157394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:08.157437  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:08.656996  521964 type.go:168] "Request Body" body=""
	I1201 21:12:08.657072  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:08.657417  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:09.157160  521964 type.go:168] "Request Body" body=""
	I1201 21:12:09.157245  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:09.157627  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:09.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:12:09.656966  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:09.657243  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:10.156932  521964 type.go:168] "Request Body" body=""
	I1201 21:12:10.157016  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:10.157347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:10.657029  521964 type.go:168] "Request Body" body=""
	I1201 21:12:10.657110  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:10.657478  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:10.657537  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:11.157239  521964 type.go:168] "Request Body" body=""
	I1201 21:12:11.157313  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:11.157609  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:11.657334  521964 type.go:168] "Request Body" body=""
	I1201 21:12:11.657410  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:11.657733  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:12.157529  521964 type.go:168] "Request Body" body=""
	I1201 21:12:12.157603  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:12.157977  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:12.657303  521964 type.go:168] "Request Body" body=""
	I1201 21:12:12.657379  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:12.657647  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:12.657692  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:13.156979  521964 type.go:168] "Request Body" body=""
	I1201 21:12:13.157059  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:13.157445  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:13.657161  521964 type.go:168] "Request Body" body=""
	I1201 21:12:13.657236  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:13.657560  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:14.157233  521964 type.go:168] "Request Body" body=""
	I1201 21:12:14.157309  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:14.157583  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:14.656977  521964 type.go:168] "Request Body" body=""
	I1201 21:12:14.657061  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:14.657408  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:15.157135  521964 type.go:168] "Request Body" body=""
	I1201 21:12:15.157216  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:15.157563  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:15.157629  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:15.657856  521964 type.go:168] "Request Body" body=""
	I1201 21:12:15.657928  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:15.658198  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:16.157210  521964 type.go:168] "Request Body" body=""
	I1201 21:12:16.157294  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:16.157627  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:16.657499  521964 type.go:168] "Request Body" body=""
	I1201 21:12:16.657580  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:16.657918  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:17.157664  521964 type.go:168] "Request Body" body=""
	I1201 21:12:17.157737  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:17.158007  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:17.158051  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:17.657817  521964 type.go:168] "Request Body" body=""
	I1201 21:12:17.657893  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:17.658321  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:18.157126  521964 type.go:168] "Request Body" body=""
	I1201 21:12:18.157218  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:18.157616  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:18.657309  521964 type.go:168] "Request Body" body=""
	I1201 21:12:18.657377  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:18.657641  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:19.157459  521964 type.go:168] "Request Body" body=""
	I1201 21:12:19.157533  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:19.157874  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:19.657700  521964 type.go:168] "Request Body" body=""
	I1201 21:12:19.657774  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:19.658113  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:19.658170  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:20.157420  521964 type.go:168] "Request Body" body=""
	I1201 21:12:20.157499  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:20.157831  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:20.657717  521964 type.go:168] "Request Body" body=""
	I1201 21:12:20.657790  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:20.658137  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:21.156870  521964 type.go:168] "Request Body" body=""
	I1201 21:12:21.156955  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:21.157335  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:21.656896  521964 type.go:168] "Request Body" body=""
	I1201 21:12:21.656973  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:21.657240  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:22.156959  521964 type.go:168] "Request Body" body=""
	I1201 21:12:22.157032  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:22.157337  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:22.157382  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:22.656961  521964 type.go:168] "Request Body" body=""
	I1201 21:12:22.657035  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:22.657334  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:23.156895  521964 type.go:168] "Request Body" body=""
	I1201 21:12:23.156974  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:23.157240  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:23.656942  521964 type.go:168] "Request Body" body=""
	I1201 21:12:23.657018  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:23.657321  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:24.156930  521964 type.go:168] "Request Body" body=""
	I1201 21:12:24.157030  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:24.157353  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:24.157404  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:24.657661  521964 type.go:168] "Request Body" body=""
	I1201 21:12:24.657744  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:24.658139  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:25.156898  521964 type.go:168] "Request Body" body=""
	I1201 21:12:25.157058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:25.157380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:25.657004  521964 type.go:168] "Request Body" body=""
	I1201 21:12:25.657102  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:25.657473  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:26.157364  521964 type.go:168] "Request Body" body=""
	I1201 21:12:26.157445  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:26.157715  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:26.157767  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:26.657747  521964 type.go:168] "Request Body" body=""
	I1201 21:12:26.657820  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:26.658119  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:27.157901  521964 type.go:168] "Request Body" body=""
	I1201 21:12:27.157983  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:27.158328  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:27.656880  521964 type.go:168] "Request Body" body=""
	I1201 21:12:27.656966  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:27.657232  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:28.156968  521964 type.go:168] "Request Body" body=""
	I1201 21:12:28.157046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:28.157396  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:28.657122  521964 type.go:168] "Request Body" body=""
	I1201 21:12:28.657193  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:28.657567  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:28.657618  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:29.157156  521964 type.go:168] "Request Body" body=""
	I1201 21:12:29.157234  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:29.157509  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:29.656952  521964 type.go:168] "Request Body" body=""
	I1201 21:12:29.657026  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:29.657361  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:30.156982  521964 type.go:168] "Request Body" body=""
	I1201 21:12:30.157060  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:30.157416  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:30.657692  521964 type.go:168] "Request Body" body=""
	I1201 21:12:30.657762  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:30.658041  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:30.658082  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:31.157866  521964 type.go:168] "Request Body" body=""
	I1201 21:12:31.157947  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:31.158324  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:31.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:12:31.657046  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:31.657381  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:32.157066  521964 type.go:168] "Request Body" body=""
	I1201 21:12:32.157144  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:32.157399  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:32.656944  521964 type.go:168] "Request Body" body=""
	I1201 21:12:32.657015  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:32.657365  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:33.156964  521964 type.go:168] "Request Body" body=""
	I1201 21:12:33.157045  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:33.157424  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:33.157484  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:33.657133  521964 type.go:168] "Request Body" body=""
	I1201 21:12:33.657209  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:33.657460  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:34.156966  521964 type.go:168] "Request Body" body=""
	I1201 21:12:34.157049  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:34.157398  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:34.657117  521964 type.go:168] "Request Body" body=""
	I1201 21:12:34.657200  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:34.657538  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:35.157873  521964 type.go:168] "Request Body" body=""
	I1201 21:12:35.157950  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:35.158226  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:35.158268  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:35.656942  521964 type.go:168] "Request Body" body=""
	I1201 21:12:35.657022  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:35.657366  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:36.157253  521964 type.go:168] "Request Body" body=""
	I1201 21:12:36.157329  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:36.157665  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:36.657154  521964 type.go:168] "Request Body" body=""
	I1201 21:12:36.657221  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:36.657490  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:37.157161  521964 type.go:168] "Request Body" body=""
	I1201 21:12:37.157235  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:37.157578  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:37.657162  521964 type.go:168] "Request Body" body=""
	I1201 21:12:37.657242  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:37.657583  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:37.657637  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:38.156913  521964 type.go:168] "Request Body" body=""
	I1201 21:12:38.156993  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:38.157311  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:38.656975  521964 type.go:168] "Request Body" body=""
	I1201 21:12:38.657056  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:38.657412  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:39.157107  521964 type.go:168] "Request Body" body=""
	I1201 21:12:39.157181  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:39.157541  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:39.657246  521964 type.go:168] "Request Body" body=""
	I1201 21:12:39.657329  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:39.657614  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:40.157008  521964 type.go:168] "Request Body" body=""
	I1201 21:12:40.157081  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:40.157402  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:40.157459  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:40.656974  521964 type.go:168] "Request Body" body=""
	I1201 21:12:40.657058  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:40.657389  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:41.156917  521964 type.go:168] "Request Body" body=""
	I1201 21:12:41.157011  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:41.157297  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:41.656997  521964 type.go:168] "Request Body" body=""
	I1201 21:12:41.657083  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:41.657499  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:42.157169  521964 type.go:168] "Request Body" body=""
	I1201 21:12:42.157262  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:42.157666  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:42.157723  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:42.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:12:42.656961  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:42.657222  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:43.156956  521964 type.go:168] "Request Body" body=""
	I1201 21:12:43.157047  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:43.157347  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:43.657015  521964 type.go:168] "Request Body" body=""
	I1201 21:12:43.657087  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:43.657366  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:44.156909  521964 type.go:168] "Request Body" body=""
	I1201 21:12:44.156982  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:44.157261  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:44.656982  521964 type.go:168] "Request Body" body=""
	I1201 21:12:44.657068  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:44.657431  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:44.657488  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:45.157013  521964 type.go:168] "Request Body" body=""
	I1201 21:12:45.157096  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:45.157431  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:45.657107  521964 type.go:168] "Request Body" body=""
	I1201 21:12:45.657195  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:45.657476  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:46.157495  521964 type.go:168] "Request Body" body=""
	I1201 21:12:46.157580  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:46.157930  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:46.656884  521964 type.go:168] "Request Body" body=""
	I1201 21:12:46.656964  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:46.657318  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:47.157023  521964 type.go:168] "Request Body" body=""
	I1201 21:12:47.157100  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:47.157421  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:47.157476  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:47.656956  521964 type.go:168] "Request Body" body=""
	I1201 21:12:47.657031  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:47.657374  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:48.156953  521964 type.go:168] "Request Body" body=""
	I1201 21:12:48.157032  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:48.157373  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:48.656942  521964 type.go:168] "Request Body" body=""
	I1201 21:12:48.657023  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:48.657325  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:49.157039  521964 type.go:168] "Request Body" body=""
	I1201 21:12:49.157121  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:49.157480  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:49.157538  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:49.656960  521964 type.go:168] "Request Body" body=""
	I1201 21:12:49.657039  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:49.657352  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:50.156889  521964 type.go:168] "Request Body" body=""
	I1201 21:12:50.156960  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:50.157229  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:50.656950  521964 type.go:168] "Request Body" body=""
	I1201 21:12:50.657037  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:50.657397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:51.157121  521964 type.go:168] "Request Body" body=""
	I1201 21:12:51.157204  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:51.157551  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:51.157618  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:51.657566  521964 type.go:168] "Request Body" body=""
	I1201 21:12:51.657641  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:51.657931  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:52.157799  521964 type.go:168] "Request Body" body=""
	I1201 21:12:52.157888  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:52.158264  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:52.656986  521964 type.go:168] "Request Body" body=""
	I1201 21:12:52.657083  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:52.657426  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:53.157683  521964 type.go:168] "Request Body" body=""
	I1201 21:12:53.157769  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:53.158044  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:53.158097  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:53.657845  521964 type.go:168] "Request Body" body=""
	I1201 21:12:53.657932  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:53.658305  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:54.156954  521964 type.go:168] "Request Body" body=""
	I1201 21:12:54.157044  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:54.157395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:54.656951  521964 type.go:168] "Request Body" body=""
	I1201 21:12:54.657024  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:54.657370  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:55.157133  521964 type.go:168] "Request Body" body=""
	I1201 21:12:55.157212  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:55.157580  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:55.657319  521964 type.go:168] "Request Body" body=""
	I1201 21:12:55.657404  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:55.657768  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:55.657823  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:56.157456  521964 type.go:168] "Request Body" body=""
	I1201 21:12:56.157537  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:56.157827  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:56.657750  521964 type.go:168] "Request Body" body=""
	I1201 21:12:56.657836  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:56.658210  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:57.156961  521964 type.go:168] "Request Body" body=""
	I1201 21:12:57.157036  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:57.157395  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:57.657097  521964 type.go:168] "Request Body" body=""
	I1201 21:12:57.657174  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:57.657457  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:58.156992  521964 type.go:168] "Request Body" body=""
	I1201 21:12:58.157072  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:58.157466  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:12:58.157532  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:12:58.657043  521964 type.go:168] "Request Body" body=""
	I1201 21:12:58.657124  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:58.657483  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:59.156864  521964 type.go:168] "Request Body" body=""
	I1201 21:12:59.156938  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:59.157199  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:12:59.656900  521964 type.go:168] "Request Body" body=""
	I1201 21:12:59.656974  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:12:59.657286  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:00.157057  521964 type.go:168] "Request Body" body=""
	I1201 21:13:00.157147  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:00.157511  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:00.157569  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:00.657428  521964 type.go:168] "Request Body" body=""
	I1201 21:13:00.657504  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:00.657796  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:01.157663  521964 type.go:168] "Request Body" body=""
	I1201 21:13:01.157764  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:01.158124  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:01.656992  521964 type.go:168] "Request Body" body=""
	I1201 21:13:01.657066  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:01.657380  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:02.157714  521964 type.go:168] "Request Body" body=""
	I1201 21:13:02.157793  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:02.158080  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:02.158125  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:02.657871  521964 type.go:168] "Request Body" body=""
	I1201 21:13:02.657947  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:02.658316  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:03.156973  521964 type.go:168] "Request Body" body=""
	I1201 21:13:03.157059  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:03.157502  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:03.657012  521964 type.go:168] "Request Body" body=""
	I1201 21:13:03.657090  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:03.657382  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:04.157107  521964 type.go:168] "Request Body" body=""
	I1201 21:13:04.157183  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:04.157524  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:04.657241  521964 type.go:168] "Request Body" body=""
	I1201 21:13:04.657321  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:04.657639  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:04.657698  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:05.156921  521964 type.go:168] "Request Body" body=""
	I1201 21:13:05.157001  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:05.157325  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:05.656994  521964 type.go:168] "Request Body" body=""
	I1201 21:13:05.657078  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:05.657437  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:06.157391  521964 type.go:168] "Request Body" body=""
	I1201 21:13:06.157477  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:06.157856  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:06.657298  521964 type.go:168] "Request Body" body=""
	I1201 21:13:06.657378  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:06.657684  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:06.657732  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:07.157498  521964 type.go:168] "Request Body" body=""
	I1201 21:13:07.157579  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:07.157929  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:07.657804  521964 type.go:168] "Request Body" body=""
	I1201 21:13:07.657885  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:07.658219  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:08.157597  521964 type.go:168] "Request Body" body=""
	I1201 21:13:08.157669  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:08.157933  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:08.657711  521964 type.go:168] "Request Body" body=""
	I1201 21:13:08.657785  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:08.658162  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:08.658217  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:09.156936  521964 type.go:168] "Request Body" body=""
	I1201 21:13:09.157015  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:09.157375  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:09.657620  521964 type.go:168] "Request Body" body=""
	I1201 21:13:09.657765  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:09.658040  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:10.157874  521964 type.go:168] "Request Body" body=""
	I1201 21:13:10.157960  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:10.158354  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:10.656946  521964 type.go:168] "Request Body" body=""
	I1201 21:13:10.657024  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:10.657358  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:11.157610  521964 type.go:168] "Request Body" body=""
	I1201 21:13:11.157697  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:11.157986  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:11.158031  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:11.656893  521964 type.go:168] "Request Body" body=""
	I1201 21:13:11.656964  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:11.657296  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:12.157004  521964 type.go:168] "Request Body" body=""
	I1201 21:13:12.157081  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:12.157397  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:12.657679  521964 type.go:168] "Request Body" body=""
	I1201 21:13:12.657749  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:12.658023  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:13.157872  521964 type.go:168] "Request Body" body=""
	I1201 21:13:13.157950  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:13.158289  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:13.158341  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:13.656969  521964 type.go:168] "Request Body" body=""
	I1201 21:13:13.657052  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:13.657394  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:14.156916  521964 type.go:168] "Request Body" body=""
	I1201 21:13:14.156994  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:14.157319  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:14.656957  521964 type.go:168] "Request Body" body=""
	I1201 21:13:14.657034  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:14.657371  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:15.156988  521964 type.go:168] "Request Body" body=""
	I1201 21:13:15.157084  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:15.157470  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:15.656852  521964 type.go:168] "Request Body" body=""
	I1201 21:13:15.656945  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:15.657219  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1201 21:13:15.657269  521964 node_ready.go:55] error getting node "functional-198694" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-198694": dial tcp 192.168.49.2:8441: connect: connection refused
	I1201 21:13:16.157274  521964 type.go:168] "Request Body" body=""
	I1201 21:13:16.157357  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:16.157728  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:16.657690  521964 type.go:168] "Request Body" body=""
	I1201 21:13:16.657781  521964 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-198694" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1201 21:13:16.658180  521964 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1201 21:13:17.157176  521964 type.go:168] "Request Body" body=""
	I1201 21:13:17.157257  521964 node_ready.go:38] duration metric: took 6m0.000516111s for node "functional-198694" to be "Ready" ...
	I1201 21:13:17.164775  521964 out.go:203] 
	W1201 21:13:17.167674  521964 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1201 21:13:17.167697  521964 out.go:285] * 
	W1201 21:13:17.169852  521964 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 21:13:17.172668  521964 out.go:203] 
	
	
	==> CRI-O <==
	Dec 01 21:13:26 functional-198694 crio[5973]: time="2025-12-01T21:13:26.334736475Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=999f49b0-4d8d-487f-b88a-584f3d8d35c4 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:26 functional-198694 crio[5973]: time="2025-12-01T21:13:26.362325785Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=e9e64604-29b3-4230-b132-48cbd4e67a88 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:26 functional-198694 crio[5973]: time="2025-12-01T21:13:26.362484739Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=e9e64604-29b3-4230-b132-48cbd4e67a88 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:26 functional-198694 crio[5973]: time="2025-12-01T21:13:26.362535519Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=e9e64604-29b3-4230-b132-48cbd4e67a88 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:27 functional-198694 crio[5973]: time="2025-12-01T21:13:27.507694356Z" level=info msg="Checking image status: minikube-local-cache-test:functional-198694" id=30062177-52f3-4ebc-ade7-ad4587233858 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:27 functional-198694 crio[5973]: time="2025-12-01T21:13:27.532768374Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-198694" id=5ddb31bb-ac5a-458c-b65b-c53b10e34ea3 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:27 functional-198694 crio[5973]: time="2025-12-01T21:13:27.532927114Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-198694 not found" id=5ddb31bb-ac5a-458c-b65b-c53b10e34ea3 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:27 functional-198694 crio[5973]: time="2025-12-01T21:13:27.532968163Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-198694 found" id=5ddb31bb-ac5a-458c-b65b-c53b10e34ea3 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:27 functional-198694 crio[5973]: time="2025-12-01T21:13:27.558507537Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-198694" id=dd7c7415-feed-41b6-a009-1c6d4a510de4 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:27 functional-198694 crio[5973]: time="2025-12-01T21:13:27.558653281Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-198694 not found" id=dd7c7415-feed-41b6-a009-1c6d4a510de4 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:27 functional-198694 crio[5973]: time="2025-12-01T21:13:27.558695963Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-198694 found" id=dd7c7415-feed-41b6-a009-1c6d4a510de4 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:28 functional-198694 crio[5973]: time="2025-12-01T21:13:28.409502634Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=c25612e6-8ceb-43a3-888e-586b437d2001 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:28 functional-198694 crio[5973]: time="2025-12-01T21:13:28.750543623Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=ca4a6a51-43f6-42c4-8e30-4576c872fa28 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:28 functional-198694 crio[5973]: time="2025-12-01T21:13:28.750734871Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=ca4a6a51-43f6-42c4-8e30-4576c872fa28 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:28 functional-198694 crio[5973]: time="2025-12-01T21:13:28.750788252Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=ca4a6a51-43f6-42c4-8e30-4576c872fa28 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.321003044Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=5933993d-142c-4792-8c0f-832fcd395510 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.32118013Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=5933993d-142c-4792-8c0f-832fcd395510 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.321241716Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=5933993d-142c-4792-8c0f-832fcd395510 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.371480366Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f4dda806-f97e-43ff-b429-2175d62d4212 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.371644095Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=f4dda806-f97e-43ff-b429-2175d62d4212 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.371698715Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=f4dda806-f97e-43ff-b429-2175d62d4212 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.398067293Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=1f4fe85d-d48b-4888-942f-bef3d6dcc64a name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.398200262Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=1f4fe85d-d48b-4888-942f-bef3d6dcc64a name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.398236454Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=1f4fe85d-d48b-4888-942f-bef3d6dcc64a name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:13:29 functional-198694 crio[5973]: time="2025-12-01T21:13:29.950677554Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=5cc173f2-1c95-471a-b9d9-748ad92a53de name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:13:34.210399   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:13:34.211098   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:13:34.215435   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:13:34.216007   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:13:34.217237   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 1 19:31] hrtimer: interrupt took 3224715 ns
	[Dec 1 20:00] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:16] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 1 20:22] systemd-journald[231]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 1 20:37] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:38] overlayfs: idmapped layers are currently not supported
	[  +0.076902] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 1 20:44] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:45] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:58] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:13:34 up  2:56,  0 user,  load average: 0.52, 0.32, 0.60
	Linux functional-198694 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 01 21:13:31 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:13:32 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1154.
	Dec 01 21:13:32 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:32 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:32 functional-198694 kubelet[9943]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:32 functional-198694 kubelet[9943]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:32 functional-198694 kubelet[9943]: E1201 21:13:32.219510    9943 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:13:32 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:13:32 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:13:32 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1155.
	Dec 01 21:13:32 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:32 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:32 functional-198694 kubelet[9964]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:32 functional-198694 kubelet[9964]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:32 functional-198694 kubelet[9964]: E1201 21:13:32.992694    9964 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:13:32 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:13:32 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:13:33 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1156.
	Dec 01 21:13:33 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:33 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:13:33 functional-198694 kubelet[9984]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:33 functional-198694 kubelet[9984]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:13:33 functional-198694 kubelet[9984]: E1201 21:13:33.717689    9984 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:13:33 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:13:33 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694: exit status 2 (362.188643ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-198694" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.57s)
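The kubelet crash loop above (restart counter past 1150) is the kubelet refusing to start on a cgroup v1 host: "kubelet is configured to not run on a host using cgroup v1". A quick, generic way to check which cgroup hierarchy a host uses — a diagnostic sketch, not part of the test harness:

```shell
# Print the filesystem type mounted at /sys/fs/cgroup:
# "cgroup2fs" indicates the unified cgroup v2 hierarchy;
# "tmpfs" indicates the legacy cgroup v1 layout (as on this
# Ubuntu 20.04 / 5.15 host).
fstype=$(stat -fc %T /sys/fs/cgroup)
echo "cgroup filesystem: $fstype"
```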

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (733.63s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-198694 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1201 21:15:34.916782  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:17:52.883949  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:18:37.986694  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:19:15.945559  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:20:34.916648  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:22:52.876149  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:25:34.916475  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-198694 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m11.452852856s)

-- stdout --
	* [functional-198694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-198694" primary control-plane node in "functional-198694" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000240491s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000272278s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000272278s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-198694 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m11.454257187s for "functional-198694" cluster.
I1201 21:25:46.748498  486002 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
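The kubeadm SystemVerification warning in the stderr above names the escape hatch: on a cgroup v1 host, kubelet v1.35+ only starts if the 'FailCgroupV1' configuration option is set to 'false' (and the validation is explicitly skipped). A minimal sketch of the corresponding KubeletConfiguration fragment, assuming the v1beta1 config API and the `failCgroupV1` field spelling — verify against the kubelet config reference for the exact version under test:

```yaml
# Fragment of /var/lib/kubelet/config.yaml (sketch only; field name
# inferred from the 'FailCgroupV1' option named in the kubeadm warning).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false
```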
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-198694
helpers_test.go:243: (dbg) docker inspect functional-198694:

-- stdout --
	[
	    {
	        "Id": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	        "Created": "2025-12-01T20:58:43.365574809Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515902,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:58:43.423541772Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hostname",
	        "HostsPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hosts",
	        "LogPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8-json.log",
	        "Name": "/functional-198694",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-198694:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-198694",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	                "LowerDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26-init/diff:/var/lib/docker/overlay2/f0ba49b44048d740697b37803f992c2f7a99e21ce77995ff128ceffc01329aa1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-198694",
	                "Source": "/var/lib/docker/volumes/functional-198694/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-198694",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-198694",
	                "name.minikube.sigs.k8s.io": "functional-198694",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8cb3cb57c35171bfce361b9e0de9c9f36ef89baf5e4ad0dd73159d10f1056820",
	            "SandboxKey": "/var/run/docker/netns/8cb3cb57c351",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-198694": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:9a:72:4c:a4:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9750c903db8645b2871ee2eb6fd897b77e607b9a995005513c7bcf81da63c819",
	                    "EndpointID": "884d9ec9fdfc44c10ccd4516f4ea05a765fb3ccb2118db0e8af2392e8613c402",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-198694",
	                        "e545295bd958"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694: exit status 2 (314.989335ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-074555 image ls --format yaml --alsologtostderr                                                                                        │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh     │ functional-074555 ssh pgrep buildkitd                                                                                                             │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │                     │
	│ image   │ functional-074555 image ls --format json --alsologtostderr                                                                                        │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image   │ functional-074555 image ls --format table --alsologtostderr                                                                                       │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image   │ functional-074555 image build -t localhost/my-image:functional-074555 testdata/build --alsologtostderr                                            │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image   │ functional-074555 image ls                                                                                                                        │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ delete  │ -p functional-074555                                                                                                                              │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ start   │ -p functional-198694 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │                     │
	│ start   │ -p functional-198694 --alsologtostderr -v=8                                                                                                       │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:07 UTC │                     │
	│ cache   │ functional-198694 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ functional-198694 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ functional-198694 cache add registry.k8s.io/pause:latest                                                                                          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ functional-198694 cache add minikube-local-cache-test:functional-198694                                                                           │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ functional-198694 cache delete minikube-local-cache-test:functional-198694                                                                        │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ ssh     │ functional-198694 ssh sudo crictl images                                                                                                          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ ssh     │ functional-198694 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ ssh     │ functional-198694 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │                     │
	│ cache   │ functional-198694 cache reload                                                                                                                    │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ ssh     │ functional-198694 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ kubectl │ functional-198694 kubectl -- --context functional-198694 get pods                                                                                 │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │                     │
	│ start   │ -p functional-198694 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 21:13:35
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 21:13:35.338314  527777 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:13:35.338426  527777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:13:35.338431  527777 out.go:374] Setting ErrFile to fd 2...
	I1201 21:13:35.338435  527777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:13:35.339011  527777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:13:35.339669  527777 out.go:368] Setting JSON to false
	I1201 21:13:35.340628  527777 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10565,"bootTime":1764613051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 21:13:35.340767  527777 start.go:143] virtualization:  
	I1201 21:13:35.344231  527777 out.go:179] * [functional-198694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 21:13:35.348003  527777 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 21:13:35.348182  527777 notify.go:221] Checking for updates...
	I1201 21:13:35.353585  527777 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 21:13:35.356421  527777 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:13:35.359084  527777 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 21:13:35.361859  527777 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 21:13:35.364606  527777 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 21:13:35.367906  527777 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:13:35.368004  527777 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 21:13:35.404299  527777 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 21:13:35.404422  527777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:13:35.463515  527777 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-01 21:13:35.453981974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:13:35.463609  527777 docker.go:319] overlay module found
	I1201 21:13:35.466875  527777 out.go:179] * Using the docker driver based on existing profile
	I1201 21:13:35.469781  527777 start.go:309] selected driver: docker
	I1201 21:13:35.469793  527777 start.go:927] validating driver "docker" against &{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:13:35.469882  527777 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 21:13:35.469988  527777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:13:35.530406  527777 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-01 21:13:35.520549629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:13:35.530815  527777 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 21:13:35.530841  527777 cni.go:84] Creating CNI manager for ""
	I1201 21:13:35.530897  527777 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:13:35.530938  527777 start.go:353] cluster config:
	{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:13:35.534086  527777 out.go:179] * Starting "functional-198694" primary control-plane node in "functional-198694" cluster
	I1201 21:13:35.536995  527777 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 21:13:35.539929  527777 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 21:13:35.542786  527777 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:13:35.542873  527777 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 21:13:35.563189  527777 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 21:13:35.563200  527777 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1201 21:13:35.608993  527777 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1201 21:13:35.806403  527777 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1201 21:13:35.806571  527777 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/config.json ...
	I1201 21:13:35.806600  527777 cache.go:107] acquiring lock: {Name:mkc02adc0b0ac86da96d7b1c6f73dd96db198bdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806692  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 21:13:35.806702  527777 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 120.653µs
	I1201 21:13:35.806710  527777 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 21:13:35.806721  527777 cache.go:107] acquiring lock: {Name:mk453dcc67fddeb9d4497c9de9efb4fa1295449c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806753  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 21:13:35.806758  527777 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 38.825µs
	I1201 21:13:35.806764  527777 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806774  527777 cache.go:107] acquiring lock: {Name:mk419ddf7fad28d46855543ef84396416e53becc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806815  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 21:13:35.806831  527777 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 48.901µs
	I1201 21:13:35.806838  527777 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806850  527777 cache.go:243] Successfully downloaded all kic artifacts
	I1201 21:13:35.806851  527777 cache.go:107] acquiring lock: {Name:mka55d294ab8a696f44b35601f713e0abbf24c5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806885  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 21:13:35.806880  527777 start.go:360] acquireMachinesLock for functional-198694: {Name:mk75190be8638b73bbf357fb21be879be3d32136 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806893  527777 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 44.405µs
	I1201 21:13:35.806899  527777 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806914  527777 cache.go:107] acquiring lock: {Name:mk6dcec1fac0989e081c750d70caa7d5974f0e1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806939  527777 start.go:364] duration metric: took 38.547µs to acquireMachinesLock for "functional-198694"
	I1201 21:13:35.806944  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 21:13:35.806949  527777 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 42.124µs
	I1201 21:13:35.806954  527777 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806962  527777 start.go:96] Skipping create...Using existing machine configuration
	I1201 21:13:35.806968  527777 fix.go:54] fixHost starting: 
	I1201 21:13:35.806963  527777 cache.go:107] acquiring lock: {Name:mkf9aa1f704582196eb72cf90c132f43843b4423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806991  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1201 21:13:35.806995  527777 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.558µs
	I1201 21:13:35.807007  527777 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 21:13:35.807016  527777 cache.go:107] acquiring lock: {Name:mk60d129c4890b38a9b86e2bfa4a9fa21bc4f57a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.807045  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 21:13:35.807049  527777 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 34.657µs
	I1201 21:13:35.807054  527777 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 21:13:35.807062  527777 cache.go:107] acquiring lock: {Name:mk345d9c863dd9143d9156cb17f795118869c197 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.807089  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 21:13:35.807094  527777 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 32.54µs
	I1201 21:13:35.807099  527777 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 21:13:35.807107  527777 cache.go:87] Successfully saved all images to host disk.
	I1201 21:13:35.807314  527777 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:13:35.826290  527777 fix.go:112] recreateIfNeeded on functional-198694: state=Running err=<nil>
	W1201 21:13:35.826315  527777 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 21:13:35.829729  527777 out.go:252] * Updating the running docker "functional-198694" container ...
	I1201 21:13:35.829761  527777 machine.go:94] provisionDockerMachine start ...
	I1201 21:13:35.829853  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:35.849270  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:35.849646  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:35.849655  527777 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 21:13:36.014195  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:13:36.014211  527777 ubuntu.go:182] provisioning hostname "functional-198694"
	I1201 21:13:36.014280  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:36.035339  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:36.035672  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:36.035681  527777 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-198694 && echo "functional-198694" | sudo tee /etc/hostname
	I1201 21:13:36.197202  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:13:36.197287  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:36.217632  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:36.217935  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:36.217948  527777 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-198694' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-198694/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-198694' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 21:13:36.367610  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 21:13:36.367629  527777 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-482752/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-482752/.minikube}
	I1201 21:13:36.367658  527777 ubuntu.go:190] setting up certificates
	I1201 21:13:36.367666  527777 provision.go:84] configureAuth start
	I1201 21:13:36.367747  527777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:13:36.387555  527777 provision.go:143] copyHostCerts
	I1201 21:13:36.387627  527777 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem, removing ...
	I1201 21:13:36.387641  527777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 21:13:36.387724  527777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem (1082 bytes)
	I1201 21:13:36.387835  527777 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem, removing ...
	I1201 21:13:36.387840  527777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 21:13:36.387866  527777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem (1123 bytes)
	I1201 21:13:36.387928  527777 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem, removing ...
	I1201 21:13:36.387933  527777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 21:13:36.387959  527777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem (1675 bytes)
	I1201 21:13:36.388014  527777 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem org=jenkins.functional-198694 san=[127.0.0.1 192.168.49.2 functional-198694 localhost minikube]
	I1201 21:13:36.864413  527777 provision.go:177] copyRemoteCerts
	I1201 21:13:36.864488  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 21:13:36.864542  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:36.883147  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:36.987572  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 21:13:37.015924  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1201 21:13:37.037590  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1201 21:13:37.056483  527777 provision.go:87] duration metric: took 688.787749ms to configureAuth
	I1201 21:13:37.056502  527777 ubuntu.go:206] setting minikube options for container-runtime
	I1201 21:13:37.056696  527777 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:13:37.056802  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.075104  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:37.075454  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:37.075468  527777 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 21:13:37.432424  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 21:13:37.432439  527777 machine.go:97] duration metric: took 1.602671146s to provisionDockerMachine
	I1201 21:13:37.432451  527777 start.go:293] postStartSetup for "functional-198694" (driver="docker")
	I1201 21:13:37.432466  527777 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 21:13:37.432544  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 21:13:37.432606  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.457485  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.563609  527777 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 21:13:37.567292  527777 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 21:13:37.567310  527777 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 21:13:37.567329  527777 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/addons for local assets ...
	I1201 21:13:37.567430  527777 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/files for local assets ...
	I1201 21:13:37.567517  527777 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> 4860022.pem in /etc/ssl/certs
	I1201 21:13:37.567613  527777 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts -> hosts in /etc/test/nested/copy/486002
	I1201 21:13:37.567670  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/486002
	I1201 21:13:37.575725  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:13:37.593481  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts --> /etc/test/nested/copy/486002/hosts (40 bytes)
	I1201 21:13:37.611620  527777 start.go:296] duration metric: took 179.151488ms for postStartSetup
	I1201 21:13:37.611718  527777 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 21:13:37.611798  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.629587  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.732362  527777 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 21:13:37.737388  527777 fix.go:56] duration metric: took 1.930412863s for fixHost
	I1201 21:13:37.737414  527777 start.go:83] releasing machines lock for "functional-198694", held for 1.930466515s
	I1201 21:13:37.737492  527777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:13:37.754641  527777 ssh_runner.go:195] Run: cat /version.json
	I1201 21:13:37.754685  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.754954  527777 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 21:13:37.755010  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.773486  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.787845  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.875124  527777 ssh_runner.go:195] Run: systemctl --version
	I1201 21:13:37.974016  527777 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 21:13:38.017000  527777 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 21:13:38.021875  527777 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 21:13:38.021957  527777 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 21:13:38.031594  527777 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 21:13:38.031622  527777 start.go:496] detecting cgroup driver to use...
	I1201 21:13:38.031660  527777 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1201 21:13:38.031747  527777 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 21:13:38.049187  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 21:13:38.064637  527777 docker.go:218] disabling cri-docker service (if available) ...
	I1201 21:13:38.064721  527777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 21:13:38.083239  527777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 21:13:38.097453  527777 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 21:13:38.249215  527777 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 21:13:38.371691  527777 docker.go:234] disabling docker service ...
	I1201 21:13:38.371769  527777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 21:13:38.388782  527777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 21:13:38.402306  527777 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 21:13:38.513914  527777 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 21:13:38.630153  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 21:13:38.644475  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 21:13:38.658966  527777 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 21:13:38.659023  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.668135  527777 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 21:13:38.668192  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.677509  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.686682  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.695781  527777 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 21:13:38.704147  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.713420  527777 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.722196  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.731481  527777 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 21:13:38.740144  527777 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 21:13:38.748176  527777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:13:38.858298  527777 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 21:13:39.035375  527777 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 21:13:39.035464  527777 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 21:13:39.039668  527777 start.go:564] Will wait 60s for crictl version
	I1201 21:13:39.039730  527777 ssh_runner.go:195] Run: which crictl
	I1201 21:13:39.043260  527777 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 21:13:39.078386  527777 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 21:13:39.078499  527777 ssh_runner.go:195] Run: crio --version
	I1201 21:13:39.110667  527777 ssh_runner.go:195] Run: crio --version
	I1201 21:13:39.146750  527777 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1201 21:13:39.149800  527777 cli_runner.go:164] Run: docker network inspect functional-198694 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 21:13:39.166717  527777 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1201 21:13:39.173972  527777 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1201 21:13:39.176755  527777 kubeadm.go:884] updating cluster {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 21:13:39.176898  527777 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:13:39.176968  527777 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 21:13:39.210945  527777 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 21:13:39.210958  527777 cache_images.go:86] Images are preloaded, skipping loading
	I1201 21:13:39.210965  527777 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1201 21:13:39.211070  527777 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-198694 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 21:13:39.211187  527777 ssh_runner.go:195] Run: crio config
	I1201 21:13:39.284437  527777 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1201 21:13:39.284481  527777 cni.go:84] Creating CNI manager for ""
	I1201 21:13:39.284491  527777 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:13:39.284499  527777 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 21:13:39.284522  527777 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-198694 NodeName:functional-198694 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 21:13:39.284675  527777 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-198694"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 21:13:39.284759  527777 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 21:13:39.293198  527777 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 21:13:39.293275  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 21:13:39.301290  527777 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 21:13:39.315108  527777 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 21:13:39.329814  527777 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1201 21:13:39.343669  527777 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1201 21:13:39.347900  527777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:13:39.461077  527777 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 21:13:39.654352  527777 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694 for IP: 192.168.49.2
	I1201 21:13:39.654364  527777 certs.go:195] generating shared ca certs ...
	I1201 21:13:39.654379  527777 certs.go:227] acquiring lock for ca certs: {Name:mk0475ccdbd6f854bab22fd8dfb32cc1af021336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:13:39.654515  527777 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key
	I1201 21:13:39.654555  527777 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key
	I1201 21:13:39.654570  527777 certs.go:257] generating profile certs ...
	I1201 21:13:39.654666  527777 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key
	I1201 21:13:39.654727  527777 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key.ab5f5a28
	I1201 21:13:39.654771  527777 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key
	I1201 21:13:39.654890  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem (1338 bytes)
	W1201 21:13:39.654921  527777 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002_empty.pem, impossibly tiny 0 bytes
	I1201 21:13:39.654928  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 21:13:39.654965  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem (1082 bytes)
	I1201 21:13:39.655015  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem (1123 bytes)
	I1201 21:13:39.655038  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem (1675 bytes)
	I1201 21:13:39.655084  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:13:39.655762  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 21:13:39.683427  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 21:13:39.704542  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 21:13:39.724282  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1201 21:13:39.744046  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 21:13:39.765204  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 21:13:39.784677  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 21:13:39.803885  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 21:13:39.822965  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /usr/share/ca-certificates/4860022.pem (1708 bytes)
	I1201 21:13:39.842026  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 21:13:39.860451  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem --> /usr/share/ca-certificates/486002.pem (1338 bytes)
	I1201 21:13:39.879380  527777 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 21:13:39.893847  527777 ssh_runner.go:195] Run: openssl version
	I1201 21:13:39.900456  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 21:13:39.910454  527777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:13:39.914599  527777 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:13:39.914672  527777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:13:39.957573  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 21:13:39.966576  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/486002.pem && ln -fs /usr/share/ca-certificates/486002.pem /etc/ssl/certs/486002.pem"
	I1201 21:13:39.976178  527777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/486002.pem
	I1201 21:13:39.980649  527777 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 21:13:39.980729  527777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/486002.pem
	I1201 21:13:40.025575  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/486002.pem /etc/ssl/certs/51391683.0"
	I1201 21:13:40.037195  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4860022.pem && ln -fs /usr/share/ca-certificates/4860022.pem /etc/ssl/certs/4860022.pem"
	I1201 21:13:40.047283  527777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4860022.pem
	I1201 21:13:40.051903  527777 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 21:13:40.051976  527777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4860022.pem
	I1201 21:13:40.094396  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4860022.pem /etc/ssl/certs/3ec20f2e.0"
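The symlink names above (`b5213941.0`, `51391683.0`, `3ec20f2e.0`) come from OpenSSL's subject-hash lookup scheme: tools that verify against a CApath directory such as `/etc/ssl/certs` look for files named `<subject-hash>.0`. A minimal sketch of the same hash-and-link steps, using a throwaway self-signed cert under `/tmp` (paths and names are illustrative, not minikube's):

```shell
# Generate a throwaway self-signed CA cert (illustrative path, not minikube's).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.pem 2>/dev/null

# Compute the subject hash OpenSSL uses to look certs up in a CApath dir,
# exactly as the `openssl x509 -hash -noout -in ...` runs in the log do.
hash=$(openssl x509 -hash -noout -in /tmp/demo-ca.pem)

# Trust-store entries are "<hash>.0" symlinks pointing at the PEM, mirroring
# the `ln -fs .../minikubeCA.pem /etc/ssl/certs/<hash>.0` commands above.
ln -fs /tmp/demo-ca.pem "/tmp/${hash}.0"
ls -l "/tmp/${hash}.0"
```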
	I1201 21:13:40.103155  527777 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 21:13:40.107392  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 21:13:40.150081  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 21:13:40.192825  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 21:13:40.234772  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 21:13:40.276722  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 21:13:40.318487  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
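The `-checkend 86400` runs above ask whether each control-plane certificate will still be valid 86400 seconds (24 h) from now; exit status 0 means it will, non-zero means it will have expired. A sketch against a throwaway 2-day cert (illustrative path, not one of the certs in the log):

```shell
# Create a short-lived test cert valid for 2 days (illustrative, not minikube's).
openssl req -x509 -newkey rsa:2048 -nodes -days 2 -subj "/CN=expiry-demo" \
  -keyout /tmp/expiry-demo.key -out /tmp/expiry-demo.pem 2>/dev/null

# Exit 0: the cert is still valid 24h (86400s) from now.
openssl x509 -noout -in /tmp/expiry-demo.pem -checkend 86400

# Exit 1: the cert will have expired 3 days (259200s) from now.
if openssl x509 -noout -in /tmp/expiry-demo.pem -checkend 259200; then
  echo "still valid in 3 days"
else
  echo "will expire within 3 days"
fi
```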
	I1201 21:13:40.360912  527777 kubeadm.go:401] StartCluster: {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:13:40.361001  527777 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 21:13:40.361062  527777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 21:13:40.390972  527777 cri.go:89] found id: ""
	I1201 21:13:40.391046  527777 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 21:13:40.399343  527777 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 21:13:40.399354  527777 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 21:13:40.399410  527777 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 21:13:40.407260  527777 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.407785  527777 kubeconfig.go:125] found "functional-198694" server: "https://192.168.49.2:8441"
	I1201 21:13:40.409130  527777 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 21:13:40.418081  527777 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-01 20:59:03.175067800 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-01 21:13:39.337074315 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1201 21:13:40.418090  527777 kubeadm.go:1161] stopping kube-system containers ...
	I1201 21:13:40.418103  527777 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1201 21:13:40.418160  527777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 21:13:40.458573  527777 cri.go:89] found id: ""
	I1201 21:13:40.458639  527777 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1201 21:13:40.477506  527777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 21:13:40.486524  527777 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  1 21:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  1 21:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  1 21:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  1 21:03 /etc/kubernetes/scheduler.conf
	
	I1201 21:13:40.486611  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1201 21:13:40.494590  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1201 21:13:40.502887  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.502952  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 21:13:40.511354  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1201 21:13:40.519815  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.519872  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 21:13:40.528897  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1201 21:13:40.537744  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.537819  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 21:13:40.546165  527777 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 21:13:40.555103  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:40.603848  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:41.842196  527777 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.238322261s)
	I1201 21:13:41.842271  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:42.059194  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:42.130722  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:42.199813  527777 api_server.go:52] waiting for apiserver process to appear ...
	I1201 21:13:42.199901  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:42.700072  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:43.200731  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:43.700027  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:44.200776  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:44.700945  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:45.200498  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:45.700869  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:46.200358  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:46.700900  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:47.200833  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:47.700432  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:48.200342  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:48.700205  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:49.200031  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:49.700873  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:50.200171  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:50.700532  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:51.199969  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:51.700026  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:52.200123  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:52.700046  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:53.200038  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:53.700680  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:54.200039  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:54.700097  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:55.200910  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:55.700336  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:56.200957  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:56.700757  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:57.200131  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:57.700100  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:58.200357  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:58.700032  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:59.200053  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:59.700687  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:00.202701  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:00.700294  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:01.200032  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:01.700969  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:02.200893  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:02.700398  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:03.200784  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:03.701004  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:04.200950  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:04.700759  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:05.200806  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:05.700896  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:06.200904  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:06.700082  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:07.200046  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:07.700894  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:08.200914  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:08.700874  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:09.200345  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:09.700662  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:10.200989  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:10.700974  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:11.200085  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:11.700353  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:12.200389  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:12.700081  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:13.200064  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:13.700099  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:14.200140  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:14.699984  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:15.200508  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:15.700076  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:16.200220  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:16.700081  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:17.200107  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:17.700353  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:18.201026  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:18.700092  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:19.200816  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:19.700821  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:20.200768  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:20.700817  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:21.200081  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:21.700135  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:22.200076  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:22.700140  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:23.200109  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:23.700040  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:24.200098  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:24.700221  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:25.200360  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:25.700585  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:26.200737  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:26.700431  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:27.200635  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:27.699983  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:28.200340  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:28.700127  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:29.200075  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:29.700352  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:30.200740  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:30.700086  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:31.200338  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:31.700759  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:32.200785  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:32.700903  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:33.200627  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:33.700920  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:34.200039  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:34.700285  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:35.200800  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:35.700353  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:36.200091  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:36.700843  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:37.200016  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:37.700190  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:38.200098  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:38.700171  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:39.200767  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:39.700973  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:40.200048  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:40.700746  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:41.200808  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:41.700037  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
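The run of lines above is minikube polling every ~500 ms, from 21:13:42 to 21:14:41, for a kube-apiserver process that never appears (`sudo pgrep -xnf kube-apiserver.*minikube.*`), after which it gives up and falls back to gathering logs. The same wait-with-deadline pattern can be sketched as follows (the function name, polled pattern, and timeout are illustrative, not minikube's code):

```shell
# Wait for a process matching a pattern, polling every 0.5s up to a deadline.
wait_for_process() {
  pattern=$1; timeout_s=$2
  deadline=$(( $(date +%s) + timeout_s ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    # pgrep -f matches against the full command line, as minikube's -xnf does
    if pgrep -f "$pattern" >/dev/null 2>&1; then
      return 0
    fi
    sleep 0.5
  done
  return 1   # deadline hit: the caller falls back, as the log does here
}

# Demo: start a background process, then poll until it is visible.
sleep 30 &
demo_pid=$!
wait_for_process "sleep 30" 5 && echo "found after poll"
kill "$demo_pid"
```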
	I1201 21:14:42.200288  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:42.200384  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:42.231074  527777 cri.go:89] found id: ""
	I1201 21:14:42.231090  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.231099  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:42.231105  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:42.231205  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:42.260877  527777 cri.go:89] found id: ""
	I1201 21:14:42.260892  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.260900  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:42.260906  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:42.260972  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:42.290930  527777 cri.go:89] found id: ""
	I1201 21:14:42.290944  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.290953  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:42.290960  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:42.291034  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:42.323761  527777 cri.go:89] found id: ""
	I1201 21:14:42.323776  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.323784  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:42.323790  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:42.323870  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:42.356722  527777 cri.go:89] found id: ""
	I1201 21:14:42.356738  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.356748  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:42.356756  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:42.356820  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:42.387639  527777 cri.go:89] found id: ""
	I1201 21:14:42.387654  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.387661  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:42.387667  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:42.387738  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:42.433777  527777 cri.go:89] found id: ""
	I1201 21:14:42.433791  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.433798  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:42.433806  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:42.433815  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:42.520716  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:42.520743  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:42.536803  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:42.536820  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:42.605090  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:42.597365   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.598034   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.599719   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.600043   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.601473   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:42.597365   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.598034   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.599719   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.600043   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.601473   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:42.605114  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:42.605125  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:42.679935  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:42.679957  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
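The container-status command above uses a fallback chain: prefer `crictl` when it resolves on `$PATH`, otherwise try `docker ps -a`; the `$(which crictl || echo crictl)` idiom keeps the command string non-empty even when `which` finds nothing, so the `||` fallback still fires. A generic sketch of the same pick-first-available pattern (the function name and placeholder commands are illustrative):

```shell
# Print the first command from the arguments that exists on $PATH.
first_available() {
  for cmd in "$@"; do
    if command -v "$cmd" >/dev/null 2>&1; then
      printf '%s\n' "$cmd"
      return 0
    fi
  done
  return 1
}

# e.g. pick a container-runtime CLI, preferring crictl over docker; `sh` is
# appended so the demo resolves to something on any POSIX system.
first_available crictl docker sh
```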
	I1201 21:14:45.213941  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:45.229905  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:45.229984  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:45.276158  527777 cri.go:89] found id: ""
	I1201 21:14:45.276174  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.276181  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:45.276187  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:45.276259  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:45.307844  527777 cri.go:89] found id: ""
	I1201 21:14:45.307859  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.307867  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:45.307872  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:45.307946  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:45.339831  527777 cri.go:89] found id: ""
	I1201 21:14:45.339845  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.339853  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:45.339858  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:45.339922  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:45.371617  527777 cri.go:89] found id: ""
	I1201 21:14:45.371632  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.371640  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:45.371646  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:45.371705  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:45.399984  527777 cri.go:89] found id: ""
	I1201 21:14:45.400005  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.400012  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:45.400017  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:45.400086  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:45.441742  527777 cri.go:89] found id: ""
	I1201 21:14:45.441755  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.441763  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:45.441769  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:45.441843  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:45.474201  527777 cri.go:89] found id: ""
	I1201 21:14:45.474216  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.474223  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:45.474231  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:45.474241  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:45.541899  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:45.541920  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:45.557525  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:45.557541  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:45.623123  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:45.614602   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.615281   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.616956   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.617711   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.619627   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:45.614602   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.615281   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.616956   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.617711   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.619627   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:45.623165  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:45.623176  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:45.703324  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:45.703344  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:48.232324  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:48.242709  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:48.242767  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:48.273768  527777 cri.go:89] found id: ""
	I1201 21:14:48.273782  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.273790  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:48.273795  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:48.273853  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:48.305133  527777 cri.go:89] found id: ""
	I1201 21:14:48.305147  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.305154  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:48.305159  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:48.305218  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:48.331706  527777 cri.go:89] found id: ""
	I1201 21:14:48.331720  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.331727  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:48.331733  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:48.331805  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:48.357401  527777 cri.go:89] found id: ""
	I1201 21:14:48.357414  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.357421  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:48.357426  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:48.357485  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:48.382601  527777 cri.go:89] found id: ""
	I1201 21:14:48.382615  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.382622  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:48.382627  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:48.382685  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:48.414103  527777 cri.go:89] found id: ""
	I1201 21:14:48.414117  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.414124  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:48.414130  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:48.414192  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:48.444275  527777 cri.go:89] found id: ""
	I1201 21:14:48.444289  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.444296  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:48.444304  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:48.444315  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:48.509613  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:48.500550   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.501177   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.502982   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.503577   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.505352   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:48.500550   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.501177   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.502982   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.503577   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.505352   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:48.509633  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:48.509645  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:48.583849  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:48.583868  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:48.611095  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:48.611113  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:48.678045  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:48.678067  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:51.193681  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:51.204158  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:51.204220  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:51.228546  527777 cri.go:89] found id: ""
	I1201 21:14:51.228560  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.228567  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:51.228573  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:51.228641  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:51.253363  527777 cri.go:89] found id: ""
	I1201 21:14:51.253377  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.253384  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:51.253389  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:51.253450  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:51.281388  527777 cri.go:89] found id: ""
	I1201 21:14:51.281403  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.281410  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:51.281415  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:51.281472  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:51.312321  527777 cri.go:89] found id: ""
	I1201 21:14:51.312334  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.312341  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:51.312347  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:51.312404  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:51.338071  527777 cri.go:89] found id: ""
	I1201 21:14:51.338084  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.338092  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:51.338097  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:51.338160  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:51.362911  527777 cri.go:89] found id: ""
	I1201 21:14:51.362925  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.362932  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:51.362938  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:51.362996  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:51.392560  527777 cri.go:89] found id: ""
	I1201 21:14:51.392575  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.392582  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:51.392589  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:51.392600  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:51.462446  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:51.462465  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:51.483328  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:51.483345  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:51.550537  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:51.542392   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.543042   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.544572   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.545190   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.546918   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:51.542392   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.543042   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.544572   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.545190   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.546918   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:51.550546  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:51.550556  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:51.627463  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:51.627484  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:54.160747  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:54.171038  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:54.171098  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:54.197306  527777 cri.go:89] found id: ""
	I1201 21:14:54.197320  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.197327  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:54.197333  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:54.197389  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:54.227205  527777 cri.go:89] found id: ""
	I1201 21:14:54.227219  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.227226  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:54.227232  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:54.227293  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:54.254126  527777 cri.go:89] found id: ""
	I1201 21:14:54.254141  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.254149  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:54.254156  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:54.254218  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:54.282152  527777 cri.go:89] found id: ""
	I1201 21:14:54.282166  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.282173  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:54.282178  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:54.282234  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:54.312220  527777 cri.go:89] found id: ""
	I1201 21:14:54.312234  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.312241  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:54.312246  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:54.312314  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:54.338233  527777 cri.go:89] found id: ""
	I1201 21:14:54.338247  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.338253  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:54.338259  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:54.338317  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:54.364068  527777 cri.go:89] found id: ""
	I1201 21:14:54.364082  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.364089  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:54.364097  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:54.364119  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:54.429655  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:54.429673  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:54.445696  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:54.445712  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:54.514079  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:54.504989   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.506549   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.507008   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.508528   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.508981   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:54.504989   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.506549   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.507008   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.508528   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.508981   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:54.514090  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:54.514100  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:54.590504  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:54.590526  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:57.119842  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:57.129802  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:57.129862  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:57.154250  527777 cri.go:89] found id: ""
	I1201 21:14:57.154263  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.154271  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:57.154276  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:57.154332  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:57.179738  527777 cri.go:89] found id: ""
	I1201 21:14:57.179761  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.179768  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:57.179775  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:57.179838  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:57.209881  527777 cri.go:89] found id: ""
	I1201 21:14:57.209895  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.209902  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:57.209907  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:57.209964  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:57.239761  527777 cri.go:89] found id: ""
	I1201 21:14:57.239775  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.239782  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:57.239787  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:57.239851  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:57.265438  527777 cri.go:89] found id: ""
	I1201 21:14:57.265457  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.265464  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:57.265470  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:57.265531  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:57.292117  527777 cri.go:89] found id: ""
	I1201 21:14:57.292131  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.292139  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:57.292145  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:57.292211  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:57.321507  527777 cri.go:89] found id: ""
	I1201 21:14:57.321526  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.321539  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:57.321547  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:57.321562  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:57.355489  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:57.355506  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:57.422253  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:57.422274  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:57.439866  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:57.439884  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:57.517974  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:57.510196   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.510601   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.512297   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.512646   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.514195   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:57.510196   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.510601   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.512297   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.512646   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.514195   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:57.517984  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:57.517997  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:00.095116  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:00.167383  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:00.167484  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:00.305857  527777 cri.go:89] found id: ""
	I1201 21:15:00.305874  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.305881  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:00.305888  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:00.305960  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:00.412948  527777 cri.go:89] found id: ""
	I1201 21:15:00.412964  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.412972  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:00.412979  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:00.413063  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:00.497486  527777 cri.go:89] found id: ""
	I1201 21:15:00.497503  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.497511  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:00.497517  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:00.497588  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:00.548544  527777 cri.go:89] found id: ""
	I1201 21:15:00.548558  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.548565  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:00.548571  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:00.548635  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:00.594658  527777 cri.go:89] found id: ""
	I1201 21:15:00.594674  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.594682  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:00.594688  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:00.594758  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:00.625642  527777 cri.go:89] found id: ""
	I1201 21:15:00.625658  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.625665  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:00.625672  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:00.625741  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:00.657944  527777 cri.go:89] found id: ""
	I1201 21:15:00.657968  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.657977  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:00.657987  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:00.657999  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:00.741394  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:00.730733   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.731901   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.732998   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.734744   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.736546   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:00.730733   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.731901   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.732998   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.734744   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.736546   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:00.741407  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:00.741425  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:00.821320  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:00.821344  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:00.857348  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:00.857380  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:00.927631  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:00.927652  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:03.446387  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:03.456673  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:03.456742  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:03.481752  527777 cri.go:89] found id: ""
	I1201 21:15:03.481766  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.481773  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:03.481779  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:03.481837  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:03.509959  527777 cri.go:89] found id: ""
	I1201 21:15:03.509974  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.509982  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:03.509987  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:03.510050  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:03.536645  527777 cri.go:89] found id: ""
	I1201 21:15:03.536659  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.536665  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:03.536671  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:03.536738  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:03.562917  527777 cri.go:89] found id: ""
	I1201 21:15:03.562932  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.562939  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:03.562945  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:03.563005  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:03.589891  527777 cri.go:89] found id: ""
	I1201 21:15:03.589905  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.589912  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:03.589918  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:03.589977  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:03.622362  527777 cri.go:89] found id: ""
	I1201 21:15:03.622376  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.622384  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:03.622390  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:03.622451  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:03.649882  527777 cri.go:89] found id: ""
	I1201 21:15:03.649897  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.649904  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:03.649912  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:03.649922  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:03.726812  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:03.726832  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:03.741643  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:03.741659  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:03.807830  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:03.800226   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.800973   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.802491   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.802813   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.804371   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:03.800226   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.800973   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.802491   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.802813   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.804371   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:03.807840  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:03.807851  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:03.882248  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:03.882268  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:06.412792  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:06.423457  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:06.423520  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:06.450416  527777 cri.go:89] found id: ""
	I1201 21:15:06.450434  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.450441  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:06.450461  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:06.450552  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:06.476229  527777 cri.go:89] found id: ""
	I1201 21:15:06.476243  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.476251  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:06.476257  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:06.476313  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:06.504311  527777 cri.go:89] found id: ""
	I1201 21:15:06.504326  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.504333  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:06.504339  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:06.504400  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:06.531500  527777 cri.go:89] found id: ""
	I1201 21:15:06.531515  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.531523  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:06.531529  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:06.531598  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:06.557205  527777 cri.go:89] found id: ""
	I1201 21:15:06.557219  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.557226  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:06.557231  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:06.557296  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:06.583224  527777 cri.go:89] found id: ""
	I1201 21:15:06.583237  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.583244  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:06.583250  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:06.583309  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:06.609560  527777 cri.go:89] found id: ""
	I1201 21:15:06.609574  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.609581  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:06.609589  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:06.609600  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:06.688119  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:06.688138  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:06.718171  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:06.718187  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:06.788360  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:06.788382  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:06.803516  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:06.803532  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:06.871576  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:06.863363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.864057   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.865787   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.866363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.867937   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:06.863363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.864057   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.865787   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.866363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.867937   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:09.373262  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:09.384129  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:09.384191  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:09.415353  527777 cri.go:89] found id: ""
	I1201 21:15:09.415369  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.415377  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:09.415384  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:09.415449  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:09.441666  527777 cri.go:89] found id: ""
	I1201 21:15:09.441681  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.441689  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:09.441707  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:09.441773  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:09.468735  527777 cri.go:89] found id: ""
	I1201 21:15:09.468749  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.468756  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:09.468761  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:09.468820  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:09.495871  527777 cri.go:89] found id: ""
	I1201 21:15:09.495885  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.495892  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:09.495898  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:09.495960  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:09.522124  527777 cri.go:89] found id: ""
	I1201 21:15:09.522138  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.522145  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:09.522151  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:09.522222  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:09.548540  527777 cri.go:89] found id: ""
	I1201 21:15:09.548554  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.548562  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:09.548568  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:09.548628  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:09.581799  527777 cri.go:89] found id: ""
	I1201 21:15:09.581814  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.581823  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:09.581831  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:09.581842  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:09.653172  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:09.653196  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:09.668649  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:09.668666  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:09.742062  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:09.733951   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.734515   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736072   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736575   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.738046   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:09.733951   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.734515   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736072   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736575   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.738046   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:09.742072  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:09.742085  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:09.817239  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:09.817259  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:12.348410  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:12.358969  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:12.359036  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:12.384762  527777 cri.go:89] found id: ""
	I1201 21:15:12.384776  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.384783  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:12.384788  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:12.384849  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:12.411423  527777 cri.go:89] found id: ""
	I1201 21:15:12.411437  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.411444  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:12.411449  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:12.411508  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:12.436624  527777 cri.go:89] found id: ""
	I1201 21:15:12.436638  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.436645  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:12.436650  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:12.436708  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:12.462632  527777 cri.go:89] found id: ""
	I1201 21:15:12.462647  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.462654  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:12.462661  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:12.462724  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:12.488511  527777 cri.go:89] found id: ""
	I1201 21:15:12.488526  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.488537  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:12.488542  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:12.488601  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:12.514421  527777 cri.go:89] found id: ""
	I1201 21:15:12.514434  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.514441  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:12.514448  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:12.514513  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:12.541557  527777 cri.go:89] found id: ""
	I1201 21:15:12.541571  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.541579  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:12.541587  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:12.541598  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:12.573231  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:12.573249  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:12.641686  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:12.641707  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:12.658713  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:12.658727  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:12.743144  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:12.734976   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.735722   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.737218   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.737705   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.739191   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:12.734976   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.735722   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.737218   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.737705   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.739191   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:12.743155  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:12.743166  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:15.318465  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:15.329023  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:15.329088  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:15.358063  527777 cri.go:89] found id: ""
	I1201 21:15:15.358077  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.358084  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:15.358090  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:15.358148  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:15.387949  527777 cri.go:89] found id: ""
	I1201 21:15:15.387963  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.387971  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:15.387976  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:15.388040  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:15.414396  527777 cri.go:89] found id: ""
	I1201 21:15:15.414412  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.414420  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:15.414425  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:15.414489  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:15.440368  527777 cri.go:89] found id: ""
	I1201 21:15:15.440383  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.440390  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:15.440396  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:15.440455  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:15.471515  527777 cri.go:89] found id: ""
	I1201 21:15:15.471529  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.471538  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:15.471544  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:15.471605  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:15.502736  527777 cri.go:89] found id: ""
	I1201 21:15:15.502750  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.502764  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:15.502770  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:15.502834  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:15.530525  527777 cri.go:89] found id: ""
	I1201 21:15:15.530540  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.530548  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:15.530555  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:15.530566  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:15.597211  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:15.588836   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.589648   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.591302   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.591840   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.593419   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:15.588836   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.589648   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.591302   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.591840   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.593419   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:15.597221  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:15.597232  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:15.673960  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:15.673983  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:15.708635  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:15.708651  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:15.779672  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:15.779693  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:18.296490  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:18.307184  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:18.307258  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:18.340992  527777 cri.go:89] found id: ""
	I1201 21:15:18.341006  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.341021  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:18.341027  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:18.341093  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:18.370602  527777 cri.go:89] found id: ""
	I1201 21:15:18.370626  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.370633  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:18.370642  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:18.370713  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:18.398425  527777 cri.go:89] found id: ""
	I1201 21:15:18.398440  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.398447  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:18.398453  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:18.398527  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:18.424514  527777 cri.go:89] found id: ""
	I1201 21:15:18.424530  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.424537  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:18.424561  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:18.424641  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:18.451718  527777 cri.go:89] found id: ""
	I1201 21:15:18.451732  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.451740  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:18.451746  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:18.451806  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:18.481779  527777 cri.go:89] found id: ""
	I1201 21:15:18.481804  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.481812  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:18.481818  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:18.481885  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:18.509744  527777 cri.go:89] found id: ""
	I1201 21:15:18.509760  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.509767  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:18.509775  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:18.509800  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:18.541318  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:18.541335  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:18.608586  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:18.608608  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:18.625859  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:18.625885  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:18.721362  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:18.711891   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.712647   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.714432   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.715256   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.717230   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:18.711891   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.712647   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.714432   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.715256   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.717230   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:18.721371  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:18.721383  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:21.298842  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:21.309420  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:21.309481  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:21.339650  527777 cri.go:89] found id: ""
	I1201 21:15:21.339664  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.339672  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:21.339678  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:21.339739  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:21.369828  527777 cri.go:89] found id: ""
	I1201 21:15:21.369843  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.369850  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:21.369857  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:21.369925  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:21.396833  527777 cri.go:89] found id: ""
	I1201 21:15:21.396860  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.396868  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:21.396874  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:21.396948  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:21.423340  527777 cri.go:89] found id: ""
	I1201 21:15:21.423354  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.423363  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:21.423369  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:21.423429  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:21.450028  527777 cri.go:89] found id: ""
	I1201 21:15:21.450041  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.450051  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:21.450057  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:21.450115  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:21.476290  527777 cri.go:89] found id: ""
	I1201 21:15:21.476305  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.476312  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:21.476317  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:21.476378  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:21.503570  527777 cri.go:89] found id: ""
	I1201 21:15:21.503591  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.503599  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:21.503607  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:21.503622  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:21.518970  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:21.518995  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:21.583522  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:21.575255   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.575783   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.577341   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.577753   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.579360   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:21.575255   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.575783   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.577341   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.577753   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.579360   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:21.583581  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:21.583592  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:21.662707  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:21.662730  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:21.693467  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:21.693484  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:24.268299  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:24.279383  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:24.279455  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:24.305720  527777 cri.go:89] found id: ""
	I1201 21:15:24.305733  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.305741  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:24.305746  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:24.305809  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:24.333862  527777 cri.go:89] found id: ""
	I1201 21:15:24.333878  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.333885  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:24.333891  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:24.333965  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:24.365916  527777 cri.go:89] found id: ""
	I1201 21:15:24.365931  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.365939  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:24.365948  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:24.366009  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:24.393185  527777 cri.go:89] found id: ""
	I1201 21:15:24.393202  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.393209  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:24.393216  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:24.393279  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:24.419532  527777 cri.go:89] found id: ""
	I1201 21:15:24.419547  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.419554  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:24.419560  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:24.419629  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:24.445529  527777 cri.go:89] found id: ""
	I1201 21:15:24.445543  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.445550  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:24.445557  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:24.445619  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:24.470988  527777 cri.go:89] found id: ""
	I1201 21:15:24.471002  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.471009  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:24.471017  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:24.471028  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:24.500416  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:24.500433  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:24.566009  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:24.566028  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:24.582350  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:24.582366  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:24.653085  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:24.643454   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.643885   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.645413   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.645743   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.647392   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:24.643454   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.643885   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.645413   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.645743   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.647392   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:24.653095  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:24.653106  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:27.239323  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:27.250432  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:27.250495  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:27.276796  527777 cri.go:89] found id: ""
	I1201 21:15:27.276824  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.276832  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:27.276837  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:27.276927  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:27.303592  527777 cri.go:89] found id: ""
	I1201 21:15:27.303607  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.303614  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:27.303620  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:27.303685  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:27.330141  527777 cri.go:89] found id: ""
	I1201 21:15:27.330155  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.330163  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:27.330168  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:27.330231  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:27.358477  527777 cri.go:89] found id: ""
	I1201 21:15:27.358491  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.358498  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:27.358503  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:27.358570  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:27.384519  527777 cri.go:89] found id: ""
	I1201 21:15:27.384533  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.384541  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:27.384547  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:27.384610  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:27.410788  527777 cri.go:89] found id: ""
	I1201 21:15:27.410804  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.410811  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:27.410817  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:27.410880  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:27.437727  527777 cri.go:89] found id: ""
	I1201 21:15:27.437742  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.437748  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:27.437756  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:27.437766  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:27.470359  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:27.470376  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:27.540219  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:27.540239  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:27.558165  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:27.558184  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:27.631990  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:27.624260   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.625006   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626587   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626906   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.628425   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:27.624260   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.625006   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626587   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626906   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.628425   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:27.632001  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:27.632013  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:30.214048  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:30.225906  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:30.225977  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:30.254528  527777 cri.go:89] found id: ""
	I1201 21:15:30.254544  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.254552  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:30.254559  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:30.254627  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:30.282356  527777 cri.go:89] found id: ""
	I1201 21:15:30.282371  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.282379  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:30.282385  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:30.282454  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:30.316244  527777 cri.go:89] found id: ""
	I1201 21:15:30.316266  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.316275  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:30.316281  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:30.316356  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:30.349310  527777 cri.go:89] found id: ""
	I1201 21:15:30.349324  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.349338  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:30.349345  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:30.349413  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:30.379233  527777 cri.go:89] found id: ""
	I1201 21:15:30.379259  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.379267  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:30.379273  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:30.379344  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:30.410578  527777 cri.go:89] found id: ""
	I1201 21:15:30.410592  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.410600  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:30.410607  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:30.410715  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:30.439343  527777 cri.go:89] found id: ""
	I1201 21:15:30.439357  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.439365  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:30.439373  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:30.439383  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:30.469722  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:30.469742  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:30.536977  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:30.536999  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:30.552719  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:30.552738  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:30.625200  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:30.616607   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.617292   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619213   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619905   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.621438   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:30.616607   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.617292   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619213   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619905   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.621438   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:30.625210  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:30.625221  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:33.202525  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:33.213081  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:33.213144  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:33.239684  527777 cri.go:89] found id: ""
	I1201 21:15:33.239699  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.239707  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:33.239713  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:33.239777  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:33.270046  527777 cri.go:89] found id: ""
	I1201 21:15:33.270060  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.270067  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:33.270073  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:33.270134  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:33.298615  527777 cri.go:89] found id: ""
	I1201 21:15:33.298631  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.298639  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:33.298646  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:33.298715  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:33.330389  527777 cri.go:89] found id: ""
	I1201 21:15:33.330403  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.330410  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:33.330416  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:33.330472  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:33.356054  527777 cri.go:89] found id: ""
	I1201 21:15:33.356068  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.356075  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:33.356081  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:33.356147  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:33.385771  527777 cri.go:89] found id: ""
	I1201 21:15:33.385784  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.385792  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:33.385797  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:33.385852  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:33.412562  527777 cri.go:89] found id: ""
	I1201 21:15:33.412580  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.412587  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:33.412601  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:33.412616  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:33.478848  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:33.478868  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:33.494280  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:33.494296  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:33.574855  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:33.566973   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.567796   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569492   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569806   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.571347   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:33.566973   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.567796   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569492   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569806   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.571347   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:33.574866  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:33.574876  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:33.653087  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:33.653110  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
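The cycles above repeat the same pattern: each poll finds no control-plane containers and every `kubectl` call fails with `connection refused` on `localhost:8441`. When triaging a log like this, it helps to count those signals rather than read every iteration. Below is a minimal sketch of such a triage helper; the function name is illustrative, not part of minikube, and it assumes the log text matches the wording seen in this report.

```shell
# Hypothetical triage helper for a minikube log like the one above.
# It counts (a) poll cycles in which no kube-apiserver container was
# found and (b) "connection refused" lines emitted by kubectl.
# summarize_minikube_log is an illustrative name, not a minikube tool.
summarize_minikube_log() {
  log="$1"
  # grep -c exits non-zero when the count is 0; "|| true" keeps the
  # printed "0" while ignoring that exit status.
  missing=$(grep -c 'No container was found matching "kube-apiserver"' "$log" || true)
  refused=$(grep -c 'connect: connection refused' "$log" || true)
  echo "apiserver-missing-cycles=$missing connection-refused-lines=$refused"
}
```

Run it against a saved log (e.g. `summarize_minikube_log minikube.log`); a large `apiserver-missing-cycles` count with matching `connection-refused-lines`, as in this report, points at the API server never starting rather than a transient network issue.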
	I1201 21:15:36.198878  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:36.209291  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:36.209352  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:36.234666  527777 cri.go:89] found id: ""
	I1201 21:15:36.234679  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.234686  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:36.234691  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:36.234747  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:36.260740  527777 cri.go:89] found id: ""
	I1201 21:15:36.260754  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.260762  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:36.260767  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:36.260830  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:36.290674  527777 cri.go:89] found id: ""
	I1201 21:15:36.290688  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.290695  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:36.290700  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:36.290800  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:36.317381  527777 cri.go:89] found id: ""
	I1201 21:15:36.317396  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.317404  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:36.317410  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:36.317477  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:36.346371  527777 cri.go:89] found id: ""
	I1201 21:15:36.346384  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.346391  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:36.346396  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:36.346458  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:36.374545  527777 cri.go:89] found id: ""
	I1201 21:15:36.374559  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.374567  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:36.374573  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:36.374632  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:36.400298  527777 cri.go:89] found id: ""
	I1201 21:15:36.400324  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.400332  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:36.400339  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:36.400350  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:36.468826  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:36.468850  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:36.484335  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:36.484351  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:36.549841  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:36.541985   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.542492   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544187   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544616   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.546198   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:36.541985   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.542492   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544187   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544616   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.546198   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:36.549853  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:36.549864  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:36.630562  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:36.630587  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:39.169136  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:39.182222  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:39.182296  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:39.212188  527777 cri.go:89] found id: ""
	I1201 21:15:39.212202  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.212208  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:39.212213  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:39.212270  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:39.237215  527777 cri.go:89] found id: ""
	I1201 21:15:39.237229  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.237236  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:39.237241  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:39.237298  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:39.262205  527777 cri.go:89] found id: ""
	I1201 21:15:39.262219  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.262226  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:39.262232  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:39.262288  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:39.290471  527777 cri.go:89] found id: ""
	I1201 21:15:39.290485  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.290492  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:39.290498  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:39.290559  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:39.316212  527777 cri.go:89] found id: ""
	I1201 21:15:39.316238  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.316245  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:39.316251  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:39.316329  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:39.341014  527777 cri.go:89] found id: ""
	I1201 21:15:39.341037  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.341045  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:39.341051  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:39.341109  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:39.375032  527777 cri.go:89] found id: ""
	I1201 21:15:39.375058  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.375067  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:39.375083  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:39.375093  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:39.447422  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:39.447444  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:39.462737  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:39.462754  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:39.534298  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:39.526942   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.527544   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.528601   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.529043   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.530634   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:39.526942   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.527544   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.528601   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.529043   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.530634   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:39.534310  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:39.534320  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:39.611187  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:39.611208  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:42.146214  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:42.159004  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:42.159073  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:42.195922  527777 cri.go:89] found id: ""
	I1201 21:15:42.195938  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.195946  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:42.195952  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:42.196022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:42.230178  527777 cri.go:89] found id: ""
	I1201 21:15:42.230193  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.230200  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:42.230206  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:42.230271  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:42.261082  527777 cri.go:89] found id: ""
	I1201 21:15:42.261098  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.261105  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:42.261111  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:42.261188  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:42.295345  527777 cri.go:89] found id: ""
	I1201 21:15:42.295361  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.295377  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:42.295383  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:42.295457  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:42.330093  527777 cri.go:89] found id: ""
	I1201 21:15:42.330109  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.330116  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:42.330122  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:42.330186  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:42.358733  527777 cri.go:89] found id: ""
	I1201 21:15:42.358748  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.358756  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:42.358761  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:42.358823  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:42.388218  527777 cri.go:89] found id: ""
	I1201 21:15:42.388233  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.388240  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:42.388247  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:42.388258  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:42.469165  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:42.469185  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:42.500328  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:42.500345  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:42.569622  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:42.569642  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:42.585628  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:42.585645  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:42.654077  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:42.643924   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.644658   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.646844   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.647501   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.648880   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:42.643924   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.644658   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.646844   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.647501   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.648880   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
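	The cycle above repeats one health probe: check for a running kube-apiserver process, then query the CRI for each expected control-plane container by name, and finally gather journal logs. A minimal sketch of that per-component probe, assuming `crictl` is on the PATH (hypothetical helper names; this is not minikube's actual code, just the command pattern visible in the log):

```shell
#!/usr/bin/env bash
# The control-plane components the log probes, in the same order.
components="kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet"

probe_cmd() {
  # Build the crictl invocation seen in the log for one component name.
  # --quiet prints only container IDs; an empty result means "not found".
  echo "sudo crictl ps -a --quiet --name=$1"
}

for c in $components; do
  probe_cmd "$c"
done
```

	When every probe returns an empty ID list, as here, the fallback is the log-gathering phase (`journalctl -u kubelet`, `journalctl -u crio`, `dmesg`, `kubectl describe nodes`), and the `describe nodes` step fails because nothing is listening on localhost:8441.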
	I1201 21:15:45.155990  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:45.177587  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:45.177664  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:45.216123  527777 cri.go:89] found id: ""
	I1201 21:15:45.216141  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.216149  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:45.216155  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:45.216241  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:45.257016  527777 cri.go:89] found id: ""
	I1201 21:15:45.257036  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.257044  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:45.257053  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:45.257139  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:45.310072  527777 cri.go:89] found id: ""
	I1201 21:15:45.310087  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.310095  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:45.310101  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:45.310165  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:45.339040  527777 cri.go:89] found id: ""
	I1201 21:15:45.339054  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.339062  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:45.339068  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:45.339154  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:45.370340  527777 cri.go:89] found id: ""
	I1201 21:15:45.370354  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.370361  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:45.370366  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:45.370426  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:45.396213  527777 cri.go:89] found id: ""
	I1201 21:15:45.396227  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.396234  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:45.396240  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:45.396299  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:45.423726  527777 cri.go:89] found id: ""
	I1201 21:15:45.423745  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.423755  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:45.423773  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:45.423784  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:45.490150  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:45.481612   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.482336   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.483955   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.484544   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.486132   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:45.481612   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.482336   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.483955   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.484544   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.486132   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:45.490161  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:45.490172  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:45.565908  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:45.565926  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:45.598740  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:45.598755  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:45.666263  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:45.666281  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:48.183348  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:48.193996  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:48.194068  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:48.221096  527777 cri.go:89] found id: ""
	I1201 21:15:48.221110  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.221117  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:48.221123  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:48.221180  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:48.247305  527777 cri.go:89] found id: ""
	I1201 21:15:48.247320  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.247328  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:48.247333  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:48.247392  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:48.277432  527777 cri.go:89] found id: ""
	I1201 21:15:48.277447  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.277453  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:48.277459  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:48.277521  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:48.304618  527777 cri.go:89] found id: ""
	I1201 21:15:48.304636  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.304643  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:48.304649  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:48.304712  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:48.331672  527777 cri.go:89] found id: ""
	I1201 21:15:48.331686  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.331694  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:48.331699  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:48.331757  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:48.360554  527777 cri.go:89] found id: ""
	I1201 21:15:48.360569  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.360577  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:48.360583  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:48.360640  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:48.385002  527777 cri.go:89] found id: ""
	I1201 21:15:48.385016  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.385023  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:48.385032  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:48.385043  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:48.414019  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:48.414036  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:48.479945  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:48.479964  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:48.495187  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:48.495206  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:48.560181  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:48.550756   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.551438   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.553149   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.554808   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.555445   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:48.550756   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.551438   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.553149   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.554808   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.555445   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:48.560191  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:48.560203  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:51.136751  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:51.147836  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:51.147914  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:51.178020  527777 cri.go:89] found id: ""
	I1201 21:15:51.178033  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.178041  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:51.178046  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:51.178106  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:51.206023  527777 cri.go:89] found id: ""
	I1201 21:15:51.206036  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.206044  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:51.206049  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:51.206150  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:51.236344  527777 cri.go:89] found id: ""
	I1201 21:15:51.236359  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.236366  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:51.236371  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:51.236434  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:51.262331  527777 cri.go:89] found id: ""
	I1201 21:15:51.262346  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.262353  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:51.262359  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:51.262419  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:51.290923  527777 cri.go:89] found id: ""
	I1201 21:15:51.290936  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.290944  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:51.290949  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:51.291016  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:51.318520  527777 cri.go:89] found id: ""
	I1201 21:15:51.318535  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.318542  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:51.318548  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:51.318607  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:51.345816  527777 cri.go:89] found id: ""
	I1201 21:15:51.345830  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.345837  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:51.345845  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:51.345857  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:51.361084  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:51.361100  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:51.427299  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:51.418365   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.419193   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.420874   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.421545   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.423332   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:51.418365   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.419193   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.420874   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.421545   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.423332   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:51.427309  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:51.427320  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:51.502906  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:51.502929  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:51.533675  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:51.533691  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:54.100640  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:54.111984  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:54.112047  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:54.137333  527777 cri.go:89] found id: ""
	I1201 21:15:54.137347  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.137353  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:54.137360  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:54.137419  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:54.166609  527777 cri.go:89] found id: ""
	I1201 21:15:54.166624  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.166635  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:54.166640  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:54.166705  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:54.193412  527777 cri.go:89] found id: ""
	I1201 21:15:54.193434  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.193441  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:54.193447  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:54.193509  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:54.219156  527777 cri.go:89] found id: ""
	I1201 21:15:54.219171  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.219178  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:54.219184  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:54.219241  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:54.248184  527777 cri.go:89] found id: ""
	I1201 21:15:54.248197  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.248204  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:54.248210  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:54.248278  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:54.274909  527777 cri.go:89] found id: ""
	I1201 21:15:54.274923  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.274931  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:54.274936  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:54.275003  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:54.300114  527777 cri.go:89] found id: ""
	I1201 21:15:54.300128  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.300135  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:54.300143  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:54.300154  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:54.366293  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:54.366312  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:54.382194  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:54.382210  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:54.446526  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:54.438379   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.439169   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.440693   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.441226   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.442826   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:54.438379   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.439169   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.440693   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.441226   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.442826   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:54.446536  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:54.446548  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:54.525097  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:54.525120  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:57.056605  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:57.067114  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:57.067185  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:57.096913  527777 cri.go:89] found id: ""
	I1201 21:15:57.096926  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.096933  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:57.096939  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:57.096995  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:57.124785  527777 cri.go:89] found id: ""
	I1201 21:15:57.124799  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.124806  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:57.124812  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:57.124877  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:57.151613  527777 cri.go:89] found id: ""
	I1201 21:15:57.151628  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.151635  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:57.151640  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:57.151702  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:57.181422  527777 cri.go:89] found id: ""
	I1201 21:15:57.181437  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.181445  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:57.181451  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:57.181510  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:57.207775  527777 cri.go:89] found id: ""
	I1201 21:15:57.207789  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.207796  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:57.207801  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:57.207861  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:57.232906  527777 cri.go:89] found id: ""
	I1201 21:15:57.232931  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.232939  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:57.232945  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:57.233016  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:57.259075  527777 cri.go:89] found id: ""
	I1201 21:15:57.259100  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.259107  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:57.259115  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:57.259126  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:57.288148  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:57.288164  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:57.355525  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:57.355545  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:57.371229  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:57.371246  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:57.439767  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:57.431231   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.431971   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.433692   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.434306   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.436090   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:57.431231   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.431971   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.433692   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.434306   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.436090   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:57.439779  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:57.439791  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:00.016574  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:00.063670  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:00.063743  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:00.181922  527777 cri.go:89] found id: ""
	I1201 21:16:00.181939  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.181947  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:00.181954  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:00.183169  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:00.318653  527777 cri.go:89] found id: ""
	I1201 21:16:00.318668  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.318676  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:00.318682  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:00.318752  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:00.366365  527777 cri.go:89] found id: ""
	I1201 21:16:00.366381  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.366391  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:00.366398  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:00.366497  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:00.432333  527777 cri.go:89] found id: ""
	I1201 21:16:00.432349  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.432358  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:00.432364  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:00.432436  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:00.487199  527777 cri.go:89] found id: ""
	I1201 21:16:00.487216  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.487238  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:00.487244  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:00.487315  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:00.541398  527777 cri.go:89] found id: ""
	I1201 21:16:00.541429  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.541438  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:00.541444  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:00.541530  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:00.577064  527777 cri.go:89] found id: ""
	I1201 21:16:00.577082  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.577095  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:00.577103  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:00.577116  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:00.646395  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:00.646418  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:00.667724  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:00.667741  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:00.750849  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:00.742119   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.743012   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.744823   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.745562   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.747124   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:00.742119   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.743012   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.744823   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.745562   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.747124   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:00.750860  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:00.750872  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:00.828858  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:00.828881  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:03.360481  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:03.371537  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:03.371611  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:03.401359  527777 cri.go:89] found id: ""
	I1201 21:16:03.401373  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.401380  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:03.401385  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:03.401452  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:03.428335  527777 cri.go:89] found id: ""
	I1201 21:16:03.428350  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.428358  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:03.428363  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:03.428424  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:03.460610  527777 cri.go:89] found id: ""
	I1201 21:16:03.460623  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.460630  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:03.460636  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:03.460695  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:03.489139  527777 cri.go:89] found id: ""
	I1201 21:16:03.489153  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.489161  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:03.489168  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:03.489234  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:03.519388  527777 cri.go:89] found id: ""
	I1201 21:16:03.519410  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.519418  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:03.519423  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:03.519490  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:03.549588  527777 cri.go:89] found id: ""
	I1201 21:16:03.549602  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.549610  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:03.549615  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:03.549678  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:03.576025  527777 cri.go:89] found id: ""
	I1201 21:16:03.576039  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.576047  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:03.576055  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:03.576066  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:03.605415  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:03.605431  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:03.675775  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:03.675797  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:03.691777  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:03.691793  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:03.765238  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:03.755858   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.756644   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.758434   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.759088   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.760930   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:03.755858   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.756644   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.758434   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.759088   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.760930   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:03.765250  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:03.765263  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:06.346338  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:06.356267  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:06.356325  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:06.380678  527777 cri.go:89] found id: ""
	I1201 21:16:06.380691  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.380717  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:06.380723  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:06.380780  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:06.410489  527777 cri.go:89] found id: ""
	I1201 21:16:06.410503  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.410518  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:06.410524  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:06.410588  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:06.443231  527777 cri.go:89] found id: ""
	I1201 21:16:06.443250  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.443257  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:06.443263  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:06.443334  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:06.468603  527777 cri.go:89] found id: ""
	I1201 21:16:06.468618  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.468625  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:06.468631  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:06.468700  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:06.493128  527777 cri.go:89] found id: ""
	I1201 21:16:06.493141  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.493148  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:06.493154  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:06.493212  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:06.518860  527777 cri.go:89] found id: ""
	I1201 21:16:06.518874  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.518881  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:06.518886  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:06.518958  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:06.545817  527777 cri.go:89] found id: ""
	I1201 21:16:06.545831  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.545839  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:06.545846  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:06.545857  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:06.610356  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:06.610378  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:06.625472  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:06.625488  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:06.722623  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:06.711338   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.712429   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.713404   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.714175   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.716915   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:06.711338   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.712429   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.713404   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.714175   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.716915   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:06.722633  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:06.722648  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:06.798208  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:06.798228  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:09.328391  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:09.339639  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:09.339706  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:09.368398  527777 cri.go:89] found id: ""
	I1201 21:16:09.368421  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.368428  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:09.368434  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:09.368512  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:09.398525  527777 cri.go:89] found id: ""
	I1201 21:16:09.398540  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.398548  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:09.398553  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:09.398615  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:09.426105  527777 cri.go:89] found id: ""
	I1201 21:16:09.426121  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.426129  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:09.426145  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:09.426205  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:09.456433  527777 cri.go:89] found id: ""
	I1201 21:16:09.456449  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.456456  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:09.456462  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:09.456525  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:09.488473  527777 cri.go:89] found id: ""
	I1201 21:16:09.488488  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.488495  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:09.488503  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:09.488563  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:09.514937  527777 cri.go:89] found id: ""
	I1201 21:16:09.514951  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.514958  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:09.514964  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:09.515027  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:09.545815  527777 cri.go:89] found id: ""
	I1201 21:16:09.545829  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.545837  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:09.545845  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:09.545857  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:09.575097  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:09.575115  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:09.642216  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:09.642237  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:09.663629  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:09.663645  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:09.745863  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:09.737300   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.737977   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.739598   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.740167   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.741918   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:09.737300   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.737977   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.739598   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.740167   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.741918   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:09.745876  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:09.745888  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:12.327853  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:12.338928  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:12.338992  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:12.372550  527777 cri.go:89] found id: ""
	I1201 21:16:12.372583  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.372591  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:12.372597  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:12.372662  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:12.402760  527777 cri.go:89] found id: ""
	I1201 21:16:12.402776  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.402784  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:12.402790  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:12.402851  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:12.429193  527777 cri.go:89] found id: ""
	I1201 21:16:12.429208  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.429215  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:12.429221  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:12.429286  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:12.456952  527777 cri.go:89] found id: ""
	I1201 21:16:12.456966  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.456973  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:12.456978  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:12.457037  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:12.483859  527777 cri.go:89] found id: ""
	I1201 21:16:12.483874  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.483881  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:12.483887  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:12.483950  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:12.510218  527777 cri.go:89] found id: ""
	I1201 21:16:12.510234  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.510242  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:12.510248  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:12.510323  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:12.536841  527777 cri.go:89] found id: ""
	I1201 21:16:12.536856  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.536864  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:12.536871  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:12.536881  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:12.612682  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:12.612702  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:12.641218  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:12.641235  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:12.719908  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:12.719930  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:12.736058  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:12.736077  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:12.803643  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:12.795056   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.795699   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.797375   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.798039   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.799685   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:12.795056   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.795699   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.797375   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.798039   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.799685   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:15.304417  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:15.314647  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:15.314707  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:15.342468  527777 cri.go:89] found id: ""
	I1201 21:16:15.342483  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.342491  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:15.342497  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:15.342559  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:15.369048  527777 cri.go:89] found id: ""
	I1201 21:16:15.369063  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.369071  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:15.369077  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:15.369140  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:15.393869  527777 cri.go:89] found id: ""
	I1201 21:16:15.393884  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.393891  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:15.393897  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:15.393960  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:15.420049  527777 cri.go:89] found id: ""
	I1201 21:16:15.420063  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.420071  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:15.420077  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:15.420136  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:15.450112  527777 cri.go:89] found id: ""
	I1201 21:16:15.450126  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.450134  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:15.450140  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:15.450201  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:15.475788  527777 cri.go:89] found id: ""
	I1201 21:16:15.475803  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.475811  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:15.475817  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:15.475884  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:15.502058  527777 cri.go:89] found id: ""
	I1201 21:16:15.502072  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.502084  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:15.502092  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:15.502102  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:15.535936  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:15.535953  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:15.601548  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:15.601568  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:15.617150  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:15.617167  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:15.694491  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:15.683261   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.684161   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.685978   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.686544   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.688226   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:15.683261   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.684161   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.685978   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.686544   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.688226   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:15.694502  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:15.694514  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:18.282089  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:18.292620  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:18.292687  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:18.320483  527777 cri.go:89] found id: ""
	I1201 21:16:18.320497  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.320504  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:18.320510  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:18.320569  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:18.346376  527777 cri.go:89] found id: ""
	I1201 21:16:18.346389  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.346397  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:18.346402  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:18.346459  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:18.377534  527777 cri.go:89] found id: ""
	I1201 21:16:18.377549  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.377557  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:18.377562  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:18.377619  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:18.402867  527777 cri.go:89] found id: ""
	I1201 21:16:18.402882  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.402892  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:18.402897  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:18.402952  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:18.429104  527777 cri.go:89] found id: ""
	I1201 21:16:18.429119  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.429126  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:18.429132  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:18.429193  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:18.455237  527777 cri.go:89] found id: ""
	I1201 21:16:18.455251  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.455257  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:18.455263  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:18.455330  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:18.480176  527777 cri.go:89] found id: ""
	I1201 21:16:18.480190  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.480197  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:18.480205  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:18.480215  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:18.554692  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:18.554713  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:18.586044  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:18.586062  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:18.654056  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:18.654076  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:18.670115  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:18.670131  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:18.739729  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:18.731971   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.732738   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.734274   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.734737   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.736253   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:18.731971   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.732738   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.734274   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.734737   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.736253   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:21.240925  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:21.251332  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:21.251400  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:21.277213  527777 cri.go:89] found id: ""
	I1201 21:16:21.277228  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.277266  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:21.277275  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:21.277349  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:21.304294  527777 cri.go:89] found id: ""
	I1201 21:16:21.304308  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.304316  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:21.304321  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:21.304393  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:21.331354  527777 cri.go:89] found id: ""
	I1201 21:16:21.331369  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.331377  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:21.331382  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:21.331455  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:21.358548  527777 cri.go:89] found id: ""
	I1201 21:16:21.358563  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.358571  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:21.358577  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:21.358637  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:21.384228  527777 cri.go:89] found id: ""
	I1201 21:16:21.384242  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.384250  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:21.384255  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:21.384321  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:21.413560  527777 cri.go:89] found id: ""
	I1201 21:16:21.413574  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.413581  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:21.413587  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:21.413647  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:21.439790  527777 cri.go:89] found id: ""
	I1201 21:16:21.439805  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.439813  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:21.439821  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:21.439839  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:21.505587  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:21.505607  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:21.522038  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:21.522064  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:21.590692  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:21.582084   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.583389   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.584091   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.585517   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.585879   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:21.582084   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.583389   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.584091   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.585517   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.585879   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:21.590718  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:21.590730  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:21.667703  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:21.667727  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:24.203209  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:24.214159  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:24.214230  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:24.242378  527777 cri.go:89] found id: ""
	I1201 21:16:24.242392  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.242399  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:24.242405  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:24.242486  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:24.269017  527777 cri.go:89] found id: ""
	I1201 21:16:24.269032  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.269039  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:24.269045  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:24.269103  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:24.295927  527777 cri.go:89] found id: ""
	I1201 21:16:24.295942  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.295949  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:24.295955  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:24.296019  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:24.321917  527777 cri.go:89] found id: ""
	I1201 21:16:24.321932  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.321939  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:24.321944  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:24.322012  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:24.350147  527777 cri.go:89] found id: ""
	I1201 21:16:24.350163  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.350171  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:24.350177  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:24.350250  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:24.376131  527777 cri.go:89] found id: ""
	I1201 21:16:24.376145  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.376153  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:24.376160  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:24.376220  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:24.403024  527777 cri.go:89] found id: ""
	I1201 21:16:24.403039  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.403046  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:24.403055  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:24.403068  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:24.418212  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:24.418230  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:24.486448  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:24.478347   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.478999   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.480897   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.481565   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.482855   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:24.478347   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.478999   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.480897   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.481565   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.482855   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:24.486460  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:24.486472  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:24.563285  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:24.563307  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:24.597003  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:24.597023  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:27.167466  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:27.179061  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:27.179139  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:27.210380  527777 cri.go:89] found id: ""
	I1201 21:16:27.210394  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.210402  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:27.210409  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:27.210474  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:27.238732  527777 cri.go:89] found id: ""
	I1201 21:16:27.238747  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.238754  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:27.238760  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:27.238827  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:27.265636  527777 cri.go:89] found id: ""
	I1201 21:16:27.265652  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.265661  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:27.265667  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:27.265736  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:27.292213  527777 cri.go:89] found id: ""
	I1201 21:16:27.292228  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.292235  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:27.292241  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:27.292300  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:27.324732  527777 cri.go:89] found id: ""
	I1201 21:16:27.324747  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.324755  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:27.324762  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:27.324827  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:27.352484  527777 cri.go:89] found id: ""
	I1201 21:16:27.352499  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.352507  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:27.352513  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:27.352590  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:27.384113  527777 cri.go:89] found id: ""
	I1201 21:16:27.384128  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.384136  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:27.384144  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:27.384155  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:27.415615  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:27.415634  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:27.482296  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:27.482319  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:27.498829  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:27.498846  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:27.569732  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:27.560441   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.561149   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.563083   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.564057   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.565939   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:27.560441   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.561149   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.563083   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.564057   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.565939   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:27.569744  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:27.569757  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:30.145371  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:30.156840  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:30.156922  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:30.184704  527777 cri.go:89] found id: ""
	I1201 21:16:30.184719  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.184727  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:30.184733  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:30.184795  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:30.213086  527777 cri.go:89] found id: ""
	I1201 21:16:30.213110  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.213120  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:30.213125  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:30.213192  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:30.245472  527777 cri.go:89] found id: ""
	I1201 21:16:30.245486  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.245494  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:30.245499  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:30.245565  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:30.273463  527777 cri.go:89] found id: ""
	I1201 21:16:30.273477  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.273485  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:30.273491  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:30.273557  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:30.302141  527777 cri.go:89] found id: ""
	I1201 21:16:30.302156  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.302164  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:30.302170  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:30.302232  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:30.329744  527777 cri.go:89] found id: ""
	I1201 21:16:30.329758  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.329765  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:30.329771  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:30.329833  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:30.356049  527777 cri.go:89] found id: ""
	I1201 21:16:30.356063  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.356071  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:30.356079  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:30.356110  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:30.424124  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:30.415484   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.416264   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.417932   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.418545   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.420321   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:30.415484   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.416264   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.417932   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.418545   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.420321   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:30.424134  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:30.424145  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:30.498989  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:30.499009  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:30.536189  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:30.536208  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:30.601111  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:30.601130  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:33.116248  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:33.129790  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:33.129876  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:33.162072  527777 cri.go:89] found id: ""
	I1201 21:16:33.162085  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.162093  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:33.162098  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:33.162168  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:33.188853  527777 cri.go:89] found id: ""
	I1201 21:16:33.188868  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.188875  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:33.188881  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:33.188944  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:33.215527  527777 cri.go:89] found id: ""
	I1201 21:16:33.215541  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.215548  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:33.215554  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:33.215613  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:33.241336  527777 cri.go:89] found id: ""
	I1201 21:16:33.241350  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.241357  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:33.241363  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:33.241422  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:33.267551  527777 cri.go:89] found id: ""
	I1201 21:16:33.267564  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.267571  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:33.267576  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:33.267639  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:33.293257  527777 cri.go:89] found id: ""
	I1201 21:16:33.293273  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.293280  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:33.293286  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:33.293346  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:33.324702  527777 cri.go:89] found id: ""
	I1201 21:16:33.324717  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.324725  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:33.324733  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:33.324745  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:33.393448  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:33.393473  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:33.409048  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:33.409075  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:33.473709  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:33.465395   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.465779   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.467541   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.468183   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.469632   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:33.465395   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.465779   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.467541   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.468183   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.469632   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:33.473720  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:33.473731  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:33.549174  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:33.549194  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:36.083124  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:36.093860  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:36.093919  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:36.122911  527777 cri.go:89] found id: ""
	I1201 21:16:36.122925  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.122932  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:36.122938  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:36.123000  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:36.148002  527777 cri.go:89] found id: ""
	I1201 21:16:36.148016  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.148023  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:36.148028  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:36.148088  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:36.173008  527777 cri.go:89] found id: ""
	I1201 21:16:36.173022  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.173029  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:36.173034  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:36.173092  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:36.198828  527777 cri.go:89] found id: ""
	I1201 21:16:36.198841  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.198848  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:36.198854  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:36.198909  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:36.224001  527777 cri.go:89] found id: ""
	I1201 21:16:36.224015  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.224022  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:36.224027  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:36.224085  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:36.249054  527777 cri.go:89] found id: ""
	I1201 21:16:36.249068  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.249075  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:36.249080  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:36.249140  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:36.273000  527777 cri.go:89] found id: ""
	I1201 21:16:36.273014  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.273021  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:36.273029  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:36.273039  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:36.337502  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:36.337521  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:36.353315  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:36.353331  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:36.424612  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:36.416389   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.416852   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.418267   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.419034   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.420807   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:16:36.424623  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:36.424633  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:36.503070  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:36.503100  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:39.034568  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:39.045696  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:39.045760  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:39.071542  527777 cri.go:89] found id: ""
	I1201 21:16:39.071555  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.071563  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:39.071569  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:39.071630  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:39.102301  527777 cri.go:89] found id: ""
	I1201 21:16:39.102315  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.102322  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:39.102328  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:39.102384  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:39.129808  527777 cri.go:89] found id: ""
	I1201 21:16:39.129823  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.129830  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:39.129836  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:39.129895  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:39.155555  527777 cri.go:89] found id: ""
	I1201 21:16:39.155569  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.155576  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:39.155582  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:39.155650  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:39.186394  527777 cri.go:89] found id: ""
	I1201 21:16:39.186408  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.186415  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:39.186420  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:39.186485  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:39.213875  527777 cri.go:89] found id: ""
	I1201 21:16:39.213889  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.213896  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:39.213901  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:39.213957  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:39.243609  527777 cri.go:89] found id: ""
	I1201 21:16:39.243623  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.243631  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:39.243640  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:39.243652  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:39.307878  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:39.307897  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:39.322972  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:39.322989  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:39.391843  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:39.383574   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.384012   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.385493   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.385831   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.387179   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:16:39.391853  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:39.391869  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:39.471894  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:39.471915  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:42.007008  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:42.029520  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:42.029588  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:42.057505  527777 cri.go:89] found id: ""
	I1201 21:16:42.057520  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.057528  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:42.057534  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:42.057598  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:42.097060  527777 cri.go:89] found id: ""
	I1201 21:16:42.097086  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.097094  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:42.097100  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:42.097191  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:42.136029  527777 cri.go:89] found id: ""
	I1201 21:16:42.136048  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.136058  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:42.136064  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:42.136155  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:42.183711  527777 cri.go:89] found id: ""
	I1201 21:16:42.183733  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.183743  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:42.183750  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:42.183825  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:42.219282  527777 cri.go:89] found id: ""
	I1201 21:16:42.219298  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.219320  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:42.219326  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:42.219393  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:42.248969  527777 cri.go:89] found id: ""
	I1201 21:16:42.248986  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.248994  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:42.249005  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:42.249079  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:42.283438  527777 cri.go:89] found id: ""
	I1201 21:16:42.283452  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.283459  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:42.283467  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:42.283479  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:42.355657  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:42.347226   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.347801   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.349475   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.349945   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.351044   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:16:42.355675  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:42.355686  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:42.432138  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:42.432158  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:42.466460  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:42.466475  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:42.532633  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:42.532653  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:45.050487  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:45.077310  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:45.077404  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:45.125431  527777 cri.go:89] found id: ""
	I1201 21:16:45.125455  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.125463  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:45.125469  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:45.125541  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:45.159113  527777 cri.go:89] found id: ""
	I1201 21:16:45.159151  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.159161  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:45.159167  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:45.159238  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:45.205059  527777 cri.go:89] found id: ""
	I1201 21:16:45.205075  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.205084  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:45.205092  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:45.205213  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:45.256952  527777 cri.go:89] found id: ""
	I1201 21:16:45.257035  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.257044  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:45.257051  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:45.257244  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:45.299953  527777 cri.go:89] found id: ""
	I1201 21:16:45.299967  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.299975  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:45.299981  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:45.300047  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:45.334546  527777 cri.go:89] found id: ""
	I1201 21:16:45.334562  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.334570  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:45.334576  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:45.334641  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:45.366922  527777 cri.go:89] found id: ""
	I1201 21:16:45.366936  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.366944  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:45.366952  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:45.366973  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:45.384985  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:45.385003  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:45.455424  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:45.445999   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.446779   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.448616   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.449343   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.450996   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:16:45.455434  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:45.455446  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:45.532668  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:45.532689  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:45.572075  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:45.572092  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:48.147493  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:48.158252  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:48.158331  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:48.185671  527777 cri.go:89] found id: ""
	I1201 21:16:48.185685  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.185692  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:48.185697  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:48.185766  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:48.211977  527777 cri.go:89] found id: ""
	I1201 21:16:48.211991  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.211998  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:48.212003  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:48.212059  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:48.238605  527777 cri.go:89] found id: ""
	I1201 21:16:48.238620  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.238627  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:48.238632  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:48.238691  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:48.272407  527777 cri.go:89] found id: ""
	I1201 21:16:48.272421  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.272428  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:48.272433  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:48.272491  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:48.300451  527777 cri.go:89] found id: ""
	I1201 21:16:48.300465  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.300472  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:48.300478  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:48.300543  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:48.326518  527777 cri.go:89] found id: ""
	I1201 21:16:48.326542  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.326550  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:48.326555  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:48.326629  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:48.353027  527777 cri.go:89] found id: ""
	I1201 21:16:48.353043  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.353050  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:48.353059  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:48.353070  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:48.418908  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:48.418928  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:48.435338  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:48.435358  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:48.502670  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:48.494115   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.494749   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.496453   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.497013   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.498610   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:16:48.502708  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:48.502718  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:48.579198  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:48.579219  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:51.111632  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:51.122895  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:51.122970  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:51.149845  527777 cri.go:89] found id: ""
	I1201 21:16:51.149859  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.149867  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:51.149872  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:51.149937  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:51.182385  527777 cri.go:89] found id: ""
	I1201 21:16:51.182399  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.182406  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:51.182411  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:51.182473  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:51.207954  527777 cri.go:89] found id: ""
	I1201 21:16:51.207967  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.208015  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:51.208024  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:51.208080  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:51.233058  527777 cri.go:89] found id: ""
	I1201 21:16:51.233071  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.233077  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:51.233083  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:51.233146  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:51.259105  527777 cri.go:89] found id: ""
	I1201 21:16:51.259119  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.259127  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:51.259147  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:51.259205  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:51.284870  527777 cri.go:89] found id: ""
	I1201 21:16:51.284884  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.284891  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:51.284896  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:51.284953  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:51.312084  527777 cri.go:89] found id: ""
	I1201 21:16:51.312099  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.312106  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:51.312115  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:51.312126  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:51.342115  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:51.342134  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:51.408816  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:51.408836  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:51.425032  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:51.425054  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:51.494088  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:51.485702   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.486261   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.487911   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.488439   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.489973   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:51.485702   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.486261   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.487911   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.488439   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.489973   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:51.494097  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:51.494107  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:54.070393  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:54.082393  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:54.082464  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:54.112007  527777 cri.go:89] found id: ""
	I1201 21:16:54.112033  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.112041  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:54.112048  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:54.112120  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:54.142629  527777 cri.go:89] found id: ""
	I1201 21:16:54.142643  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.142650  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:54.142656  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:54.142715  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:54.170596  527777 cri.go:89] found id: ""
	I1201 21:16:54.170611  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.170618  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:54.170623  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:54.170685  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:54.199276  527777 cri.go:89] found id: ""
	I1201 21:16:54.199301  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.199309  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:54.199314  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:54.199385  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:54.229268  527777 cri.go:89] found id: ""
	I1201 21:16:54.229285  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.229294  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:54.229300  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:54.229378  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:54.261273  527777 cri.go:89] found id: ""
	I1201 21:16:54.261289  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.261298  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:54.261306  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:54.261409  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:54.289154  527777 cri.go:89] found id: ""
	I1201 21:16:54.289169  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.289189  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:54.289199  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:54.289216  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:54.363048  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:54.355149   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.356097   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.357711   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.358323   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.359471   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:54.355149   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.356097   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.357711   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.358323   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.359471   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:54.363059  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:54.363070  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:54.440875  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:54.440897  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:54.471338  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:54.471355  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:54.543810  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:54.543830  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:57.061388  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:57.071929  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:57.071998  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:57.102516  527777 cri.go:89] found id: ""
	I1201 21:16:57.102531  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.102540  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:57.102546  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:57.102614  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:57.129734  527777 cri.go:89] found id: ""
	I1201 21:16:57.129749  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.129756  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:57.129761  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:57.129825  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:57.160948  527777 cri.go:89] found id: ""
	I1201 21:16:57.160962  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.160971  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:57.160977  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:57.161049  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:57.192059  527777 cri.go:89] found id: ""
	I1201 21:16:57.192075  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.192082  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:57.192088  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:57.192155  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:57.217906  527777 cri.go:89] found id: ""
	I1201 21:16:57.217920  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.217927  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:57.217932  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:57.217992  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:57.246391  527777 cri.go:89] found id: ""
	I1201 21:16:57.246406  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.246414  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:57.246420  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:57.246480  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:57.273534  527777 cri.go:89] found id: ""
	I1201 21:16:57.273558  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.273565  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:57.273573  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:57.273585  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:57.338589  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:57.338609  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:57.354225  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:57.354241  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:57.425192  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:57.416917   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.417985   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419291   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419806   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.421427   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:57.416917   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.417985   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419291   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419806   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.421427   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:57.425202  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:57.425213  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:57.501690  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:57.501713  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:00.031846  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:00.071974  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:00.072071  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:00.158888  527777 cri.go:89] found id: ""
	I1201 21:17:00.158904  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.158912  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:00.158918  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:00.158994  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:00.267283  527777 cri.go:89] found id: ""
	I1201 21:17:00.267299  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.267306  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:00.267312  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:00.267395  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:00.331710  527777 cri.go:89] found id: ""
	I1201 21:17:00.331725  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.331733  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:00.331740  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:00.331821  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:00.416435  527777 cri.go:89] found id: ""
	I1201 21:17:00.416468  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.416476  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:00.416482  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:00.416566  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:00.456878  527777 cri.go:89] found id: ""
	I1201 21:17:00.456894  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.456904  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:00.456909  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:00.456979  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:00.511096  527777 cri.go:89] found id: ""
	I1201 21:17:00.511113  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.511122  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:00.511166  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:00.511245  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:00.565444  527777 cri.go:89] found id: ""
	I1201 21:17:00.565463  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.565471  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:00.565480  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:00.565498  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:00.641086  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:00.641121  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:00.662045  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:00.662064  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:00.750234  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:00.740709   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.741500   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.743472   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.744204   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.745911   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:00.740709   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.741500   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.743472   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.744204   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.745911   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:00.750246  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:00.750258  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:00.828511  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:00.828539  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:03.366405  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:03.379053  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:03.379127  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:03.412977  527777 cri.go:89] found id: ""
	I1201 21:17:03.412991  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.412999  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:03.413005  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:03.413074  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:03.442789  527777 cri.go:89] found id: ""
	I1201 21:17:03.442817  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.442827  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:03.442834  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:03.442956  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:03.472731  527777 cri.go:89] found id: ""
	I1201 21:17:03.472758  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.472767  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:03.472772  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:03.472843  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:03.503719  527777 cri.go:89] found id: ""
	I1201 21:17:03.503735  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.503744  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:03.503751  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:03.503823  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:03.533642  527777 cri.go:89] found id: ""
	I1201 21:17:03.533658  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.533665  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:03.533671  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:03.533749  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:03.562889  527777 cri.go:89] found id: ""
	I1201 21:17:03.562908  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.562916  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:03.562922  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:03.563006  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:03.592257  527777 cri.go:89] found id: ""
	I1201 21:17:03.592275  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.592283  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:03.592291  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:03.592303  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:03.660263  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:03.660282  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:03.683357  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:03.683375  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:03.765695  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:03.755989   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.757040   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758018   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758825   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.760781   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:03.755989   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.757040   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758018   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758825   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.760781   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:03.765707  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:03.765719  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:03.842543  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:03.842567  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:06.376185  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:06.387932  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:06.388000  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:06.417036  527777 cri.go:89] found id: ""
	I1201 21:17:06.417050  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.417058  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:06.417064  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:06.417125  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:06.447064  527777 cri.go:89] found id: ""
	I1201 21:17:06.447090  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.447098  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:06.447104  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:06.447207  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:06.476879  527777 cri.go:89] found id: ""
	I1201 21:17:06.476893  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.476900  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:06.476905  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:06.476968  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:06.506320  527777 cri.go:89] found id: ""
	I1201 21:17:06.506338  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.506346  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:06.506352  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:06.506419  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:06.535420  527777 cri.go:89] found id: ""
	I1201 21:17:06.535443  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.535451  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:06.535458  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:06.535525  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:06.563751  527777 cri.go:89] found id: ""
	I1201 21:17:06.563784  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.563792  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:06.563798  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:06.563865  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:06.597779  527777 cri.go:89] found id: ""
	I1201 21:17:06.597795  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.597803  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:06.597811  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:06.597823  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:06.681458  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:06.672535   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.673200   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.674869   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.675413   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.677204   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:06.672535   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.673200   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.674869   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.675413   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.677204   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:06.681470  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:06.681482  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:06.778343  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:06.778369  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:06.812835  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:06.812854  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:06.886097  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:06.886123  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:09.404611  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:09.415307  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:09.415386  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:09.454145  527777 cri.go:89] found id: ""
	I1201 21:17:09.454159  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.454168  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:09.454174  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:09.454240  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:09.483869  527777 cri.go:89] found id: ""
	I1201 21:17:09.483885  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.483893  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:09.483899  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:09.483961  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:09.510637  527777 cri.go:89] found id: ""
	I1201 21:17:09.510650  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.510657  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:09.510662  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:09.510719  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:09.542823  527777 cri.go:89] found id: ""
	I1201 21:17:09.542837  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.542844  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:09.542849  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:09.542911  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:09.570165  527777 cri.go:89] found id: ""
	I1201 21:17:09.570184  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.570191  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:09.570196  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:09.570254  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:09.595630  527777 cri.go:89] found id: ""
	I1201 21:17:09.595645  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.595652  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:09.595658  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:09.595722  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:09.621205  527777 cri.go:89] found id: ""
	I1201 21:17:09.621219  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.621226  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:09.621234  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:09.621244  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:09.700160  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:09.700182  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:09.739401  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:09.739425  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:09.809572  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:09.809594  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:09.828869  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:09.828886  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:09.920701  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:09.910986   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.911656   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.913525   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.914123   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.915691   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:09.910986   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.911656   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.913525   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.914123   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.915691   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:12.421012  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:12.432213  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:12.432287  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:12.459734  527777 cri.go:89] found id: ""
	I1201 21:17:12.459757  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.459765  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:12.459771  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:12.459840  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:12.485671  527777 cri.go:89] found id: ""
	I1201 21:17:12.485685  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.485692  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:12.485698  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:12.485757  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:12.511548  527777 cri.go:89] found id: ""
	I1201 21:17:12.511564  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.511572  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:12.511577  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:12.511637  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:12.542030  527777 cri.go:89] found id: ""
	I1201 21:17:12.542046  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.542053  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:12.542060  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:12.542120  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:12.567661  527777 cri.go:89] found id: ""
	I1201 21:17:12.567675  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.567691  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:12.567696  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:12.567766  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:12.597625  527777 cri.go:89] found id: ""
	I1201 21:17:12.597640  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.597647  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:12.597653  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:12.597718  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:12.623694  527777 cri.go:89] found id: ""
	I1201 21:17:12.623708  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.623715  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:12.623722  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:12.623733  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:12.638757  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:12.638772  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:12.731591  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:12.722231   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.723090   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.724750   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.725287   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.726853   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:12.722231   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.723090   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.724750   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.725287   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.726853   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:12.731601  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:12.731612  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:12.808720  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:12.808739  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:12.838448  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:12.838465  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:15.411670  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:15.422227  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:15.422288  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:15.449244  527777 cri.go:89] found id: ""
	I1201 21:17:15.449267  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.449275  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:15.449281  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:15.449351  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:15.475790  527777 cri.go:89] found id: ""
	I1201 21:17:15.475804  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.475812  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:15.475817  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:15.475883  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:15.505030  527777 cri.go:89] found id: ""
	I1201 21:17:15.505044  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.505052  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:15.505057  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:15.505121  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:15.535702  527777 cri.go:89] found id: ""
	I1201 21:17:15.535717  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.535726  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:15.535732  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:15.535802  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:15.561881  527777 cri.go:89] found id: ""
	I1201 21:17:15.561895  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.561903  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:15.561909  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:15.561968  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:15.589608  527777 cri.go:89] found id: ""
	I1201 21:17:15.589623  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.589631  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:15.589637  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:15.589704  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:15.617545  527777 cri.go:89] found id: ""
	I1201 21:17:15.617559  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.617565  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:15.617573  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:15.617584  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:15.633049  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:15.633067  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:15.719603  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:15.707520   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.708421   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710252   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710836   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.715756   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:15.707520   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.708421   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710252   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710836   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.715756   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:15.719617  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:15.719628  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:15.795783  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:15.795806  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:15.829611  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:15.829629  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:18.397343  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:18.407645  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:18.407707  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:18.431992  527777 cri.go:89] found id: ""
	I1201 21:17:18.432013  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.432020  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:18.432025  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:18.432082  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:18.456900  527777 cri.go:89] found id: ""
	I1201 21:17:18.456914  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.456921  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:18.456927  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:18.456985  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:18.482130  527777 cri.go:89] found id: ""
	I1201 21:17:18.482144  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.482151  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:18.482156  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:18.482216  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:18.506788  527777 cri.go:89] found id: ""
	I1201 21:17:18.506802  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.506809  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:18.506814  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:18.506880  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:18.535015  527777 cri.go:89] found id: ""
	I1201 21:17:18.535029  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.535036  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:18.535041  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:18.535102  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:18.561266  527777 cri.go:89] found id: ""
	I1201 21:17:18.561281  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.561288  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:18.561294  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:18.561350  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:18.590006  527777 cri.go:89] found id: ""
	I1201 21:17:18.590020  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.590027  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:18.590034  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:18.590044  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:18.655626  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:18.655644  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:18.673142  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:18.673158  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:18.755072  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:18.747127   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.747701   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749289   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749738   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.751418   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:18.747127   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.747701   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749289   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749738   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.751418   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:18.755084  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:18.755097  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:18.830997  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:18.831019  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:21.361828  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:21.372633  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:21.372693  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:21.397967  527777 cri.go:89] found id: ""
	I1201 21:17:21.397981  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.398009  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:21.398014  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:21.398083  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:21.424540  527777 cri.go:89] found id: ""
	I1201 21:17:21.424554  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.424570  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:21.424575  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:21.424644  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:21.450905  527777 cri.go:89] found id: ""
	I1201 21:17:21.450920  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.450948  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:21.450954  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:21.451029  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:21.483885  527777 cri.go:89] found id: ""
	I1201 21:17:21.483899  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.483906  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:21.483911  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:21.483966  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:21.514135  527777 cri.go:89] found id: ""
	I1201 21:17:21.514149  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.514156  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:21.514162  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:21.514221  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:21.540203  527777 cri.go:89] found id: ""
	I1201 21:17:21.540217  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.540224  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:21.540229  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:21.540285  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:21.570752  527777 cri.go:89] found id: ""
	I1201 21:17:21.570765  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.570772  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:21.570780  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:21.570794  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:21.636631  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:21.636651  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:21.652498  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:21.652516  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:21.739586  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:21.730607   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.731381   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733218   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733844   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.735529   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:21.730607   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.731381   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733218   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733844   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.735529   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:21.739597  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:21.739609  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:21.815773  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:21.815793  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:24.351500  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:24.361669  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:24.361728  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:24.390941  527777 cri.go:89] found id: ""
	I1201 21:17:24.390955  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.390962  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:24.390968  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:24.391024  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:24.416426  527777 cri.go:89] found id: ""
	I1201 21:17:24.416440  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.416448  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:24.416453  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:24.416510  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:24.443044  527777 cri.go:89] found id: ""
	I1201 21:17:24.443058  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.443065  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:24.443070  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:24.443182  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:24.468754  527777 cri.go:89] found id: ""
	I1201 21:17:24.468769  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.468776  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:24.468781  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:24.468840  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:24.494385  527777 cri.go:89] found id: ""
	I1201 21:17:24.494399  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.494406  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:24.494416  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:24.494477  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:24.519676  527777 cri.go:89] found id: ""
	I1201 21:17:24.519689  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.519696  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:24.519702  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:24.519761  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:24.546000  527777 cri.go:89] found id: ""
	I1201 21:17:24.546014  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.546021  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:24.546028  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:24.546041  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:24.611509  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:24.611529  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:24.626295  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:24.626324  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:24.702708  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:24.694946   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.695784   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697344   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697621   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.699100   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:24.694946   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.695784   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697344   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697621   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.699100   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:24.702719  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:24.702731  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:24.784492  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:24.784514  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:27.320817  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:27.331542  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:27.331602  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:27.357014  527777 cri.go:89] found id: ""
	I1201 21:17:27.357028  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.357035  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:27.357040  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:27.357098  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:27.381792  527777 cri.go:89] found id: ""
	I1201 21:17:27.381806  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.381813  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:27.381818  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:27.381880  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:27.407905  527777 cri.go:89] found id: ""
	I1201 21:17:27.407919  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.407927  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:27.407933  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:27.407994  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:27.433511  527777 cri.go:89] found id: ""
	I1201 21:17:27.433526  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.433533  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:27.433539  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:27.433596  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:27.459609  527777 cri.go:89] found id: ""
	I1201 21:17:27.459622  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.459629  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:27.459635  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:27.459700  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:27.487173  527777 cri.go:89] found id: ""
	I1201 21:17:27.487186  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.487193  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:27.487199  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:27.487257  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:27.512860  527777 cri.go:89] found id: ""
	I1201 21:17:27.512874  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.512881  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:27.512889  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:27.512901  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:27.541723  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:27.541739  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:27.606990  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:27.607009  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:27.622689  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:27.622705  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:27.700563  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:27.692859   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.693627   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695255   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695560   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.697023   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:27.692859   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.693627   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695255   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695560   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.697023   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:27.700573  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:27.700586  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:30.289250  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:30.300157  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:30.300217  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:30.327373  527777 cri.go:89] found id: ""
	I1201 21:17:30.327394  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.327405  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:30.327420  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:30.327492  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:30.353615  527777 cri.go:89] found id: ""
	I1201 21:17:30.353629  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.353636  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:30.353642  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:30.353702  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:30.385214  527777 cri.go:89] found id: ""
	I1201 21:17:30.385228  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.385235  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:30.385240  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:30.385300  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:30.415674  527777 cri.go:89] found id: ""
	I1201 21:17:30.415688  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.415695  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:30.415701  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:30.415767  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:30.442641  527777 cri.go:89] found id: ""
	I1201 21:17:30.442656  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.442663  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:30.442668  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:30.442726  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:30.469997  527777 cri.go:89] found id: ""
	I1201 21:17:30.470010  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.470017  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:30.470023  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:30.470081  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:30.495554  527777 cri.go:89] found id: ""
	I1201 21:17:30.495570  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.495579  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:30.495587  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:30.495599  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:30.559878  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:30.552159   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.552978   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554577   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554896   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.556427   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:30.552159   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.552978   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554577   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554896   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.556427   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:30.559888  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:30.559899  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:30.635560  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:30.635581  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:30.673666  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:30.673682  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:30.747787  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:30.747808  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:33.264623  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:33.276366  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:33.276427  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:33.306447  527777 cri.go:89] found id: ""
	I1201 21:17:33.306461  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.306473  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:33.306478  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:33.306538  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:33.334715  527777 cri.go:89] found id: ""
	I1201 21:17:33.334730  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.334738  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:33.334744  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:33.334814  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:33.365674  527777 cri.go:89] found id: ""
	I1201 21:17:33.365690  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.365698  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:33.365705  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:33.365774  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:33.396072  527777 cri.go:89] found id: ""
	I1201 21:17:33.396089  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.396096  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:33.396103  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:33.396175  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:33.429356  527777 cri.go:89] found id: ""
	I1201 21:17:33.429372  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.429381  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:33.429387  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:33.429461  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:33.457917  527777 cri.go:89] found id: ""
	I1201 21:17:33.457932  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.457941  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:33.457948  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:33.458022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:33.490167  527777 cri.go:89] found id: ""
	I1201 21:17:33.490182  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.490190  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:33.490199  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:33.490212  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:33.558131  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:33.558155  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:33.575080  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:33.575101  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:33.657808  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:33.644900   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.645597   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647206   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647744   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.649342   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:33.644900   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.645597   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647206   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647744   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.649342   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:33.657834  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:33.657848  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:33.754296  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:33.754323  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:36.289647  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:36.300774  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:36.300833  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:36.327492  527777 cri.go:89] found id: ""
	I1201 21:17:36.327507  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.327514  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:36.327520  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:36.327583  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:36.359515  527777 cri.go:89] found id: ""
	I1201 21:17:36.359529  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.359537  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:36.359542  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:36.359606  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:36.387977  527777 cri.go:89] found id: ""
	I1201 21:17:36.387990  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.387997  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:36.388002  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:36.388058  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:36.413410  527777 cri.go:89] found id: ""
	I1201 21:17:36.413429  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.413436  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:36.413442  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:36.413499  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:36.440588  527777 cri.go:89] found id: ""
	I1201 21:17:36.440614  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.440622  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:36.440627  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:36.440698  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:36.471404  527777 cri.go:89] found id: ""
	I1201 21:17:36.471419  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.471427  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:36.471433  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:36.471500  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:36.499502  527777 cri.go:89] found id: ""
	I1201 21:17:36.499518  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.499528  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:36.499536  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:36.499546  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:36.568027  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:36.568052  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:36.584561  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:36.584580  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:36.665718  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:36.648985   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.649527   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651261   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651644   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.653266   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:36.648985   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.649527   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651261   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651644   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.653266   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:36.665728  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:36.665740  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:36.748791  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:36.748812  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:39.285189  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:39.296369  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:39.296438  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:39.323280  527777 cri.go:89] found id: ""
	I1201 21:17:39.323294  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.323306  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:39.323312  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:39.323379  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:39.352092  527777 cri.go:89] found id: ""
	I1201 21:17:39.352107  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.352115  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:39.352120  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:39.352187  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:39.379352  527777 cri.go:89] found id: ""
	I1201 21:17:39.379367  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.379375  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:39.379382  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:39.379446  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:39.406925  527777 cri.go:89] found id: ""
	I1201 21:17:39.406940  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.406947  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:39.406954  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:39.407022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:39.434427  527777 cri.go:89] found id: ""
	I1201 21:17:39.434442  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.434450  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:39.434455  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:39.434521  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:39.466725  527777 cri.go:89] found id: ""
	I1201 21:17:39.466741  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.466748  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:39.466755  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:39.466821  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:39.494952  527777 cri.go:89] found id: ""
	I1201 21:17:39.494968  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.494976  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:39.494985  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:39.494998  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:39.510984  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:39.511002  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:39.585968  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:39.576561   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.577151   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.578340   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.579982   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.580410   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:39.576561   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.577151   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.578340   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.579982   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.580410   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:39.585981  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:39.585993  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:39.669009  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:39.669033  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:39.705170  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:39.705189  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:42.275450  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:42.287572  527777 kubeadm.go:602] duration metric: took 4m1.888207918s to restartPrimaryControlPlane
	W1201 21:17:42.287658  527777 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1201 21:17:42.287747  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1201 21:17:42.711674  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 21:17:42.725511  527777 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 21:17:42.734239  527777 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 21:17:42.734308  527777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 21:17:42.743050  527777 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 21:17:42.743060  527777 kubeadm.go:158] found existing configuration files:
	
	I1201 21:17:42.743120  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1201 21:17:42.751678  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 21:17:42.751731  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 21:17:42.759481  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1201 21:17:42.767903  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 21:17:42.767964  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 21:17:42.776067  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1201 21:17:42.784283  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 21:17:42.784355  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 21:17:42.792582  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1201 21:17:42.801449  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 21:17:42.801518  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 21:17:42.809783  527777 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 21:17:42.849635  527777 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 21:17:42.849689  527777 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 21:17:42.929073  527777 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 21:17:42.929165  527777 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1201 21:17:42.929199  527777 kubeadm.go:319] OS: Linux
	I1201 21:17:42.929243  527777 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 21:17:42.929296  527777 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1201 21:17:42.929342  527777 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 21:17:42.929388  527777 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 21:17:42.929435  527777 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 21:17:42.929482  527777 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 21:17:42.929526  527777 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 21:17:42.929573  527777 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 21:17:42.929617  527777 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1201 21:17:43.002025  527777 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 21:17:43.002165  527777 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 21:17:43.002258  527777 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 21:17:43.013458  527777 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 21:17:43.017000  527777 out.go:252]   - Generating certificates and keys ...
	I1201 21:17:43.017095  527777 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 21:17:43.017170  527777 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 21:17:43.017252  527777 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1201 21:17:43.017311  527777 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1201 21:17:43.017379  527777 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1201 21:17:43.017434  527777 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1201 21:17:43.017501  527777 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1201 21:17:43.017561  527777 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1201 21:17:43.017634  527777 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1201 21:17:43.017705  527777 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1201 21:17:43.017832  527777 kubeadm.go:319] [certs] Using the existing "sa" key
	I1201 21:17:43.017892  527777 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 21:17:43.133992  527777 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 21:17:43.467350  527777 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 21:17:43.613021  527777 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 21:17:43.910424  527777 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 21:17:44.196121  527777 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 21:17:44.196632  527777 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 21:17:44.199145  527777 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 21:17:44.202480  527777 out.go:252]   - Booting up control plane ...
	I1201 21:17:44.202575  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 21:17:44.202651  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 21:17:44.202718  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 21:17:44.217388  527777 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 21:17:44.217714  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 21:17:44.228031  527777 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 21:17:44.228400  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 21:17:44.228517  527777 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 21:17:44.357408  527777 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 21:17:44.357522  527777 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 21:21:44.357404  527777 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000240491s
	I1201 21:21:44.357429  527777 kubeadm.go:319] 
	I1201 21:21:44.357487  527777 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1201 21:21:44.357523  527777 kubeadm.go:319] 	- The kubelet is not running
	I1201 21:21:44.357633  527777 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1201 21:21:44.357637  527777 kubeadm.go:319] 
	I1201 21:21:44.357830  527777 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1201 21:21:44.357863  527777 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1201 21:21:44.357893  527777 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1201 21:21:44.357896  527777 kubeadm.go:319] 
	I1201 21:21:44.361511  527777 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1201 21:21:44.361943  527777 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1201 21:21:44.362051  527777 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 21:21:44.362287  527777 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1201 21:21:44.362292  527777 kubeadm.go:319] 
	I1201 21:21:44.362361  527777 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1201 21:21:44.362491  527777 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000240491s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1201 21:21:44.362579  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1201 21:21:44.772977  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 21:21:44.786214  527777 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 21:21:44.786270  527777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 21:21:44.794556  527777 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 21:21:44.794568  527777 kubeadm.go:158] found existing configuration files:
	
	I1201 21:21:44.794622  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1201 21:21:44.803048  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 21:21:44.803106  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 21:21:44.810695  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1201 21:21:44.818882  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 21:21:44.818947  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 21:21:44.827077  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1201 21:21:44.834936  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 21:21:44.834995  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 21:21:44.843074  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1201 21:21:44.851084  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 21:21:44.851166  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 21:21:44.858721  527777 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 21:21:44.981319  527777 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1201 21:21:44.981788  527777 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1201 21:21:45.157392  527777 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 21:25:46.243317  527777 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1201 21:25:46.243344  527777 kubeadm.go:319] 
	I1201 21:25:46.243413  527777 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1201 21:25:46.246817  527777 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 21:25:46.246871  527777 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 21:25:46.246962  527777 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 21:25:46.247022  527777 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1201 21:25:46.247057  527777 kubeadm.go:319] OS: Linux
	I1201 21:25:46.247100  527777 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 21:25:46.247175  527777 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1201 21:25:46.247246  527777 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 21:25:46.247312  527777 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 21:25:46.247369  527777 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 21:25:46.247421  527777 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 21:25:46.247464  527777 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 21:25:46.247511  527777 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 21:25:46.247555  527777 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1201 21:25:46.247626  527777 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 21:25:46.247719  527777 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 21:25:46.247811  527777 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 21:25:46.247872  527777 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 21:25:46.250950  527777 out.go:252]   - Generating certificates and keys ...
	I1201 21:25:46.251041  527777 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 21:25:46.251105  527777 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 21:25:46.251224  527777 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1201 21:25:46.251290  527777 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1201 21:25:46.251369  527777 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1201 21:25:46.251431  527777 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1201 21:25:46.251495  527777 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1201 21:25:46.251555  527777 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1201 21:25:46.251629  527777 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1201 21:25:46.251704  527777 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1201 21:25:46.251741  527777 kubeadm.go:319] [certs] Using the existing "sa" key
	I1201 21:25:46.251795  527777 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 21:25:46.251845  527777 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 21:25:46.251899  527777 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 21:25:46.251951  527777 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 21:25:46.252012  527777 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 21:25:46.252065  527777 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 21:25:46.252149  527777 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 21:25:46.252213  527777 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 21:25:46.255065  527777 out.go:252]   - Booting up control plane ...
	I1201 21:25:46.255213  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 21:25:46.255292  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 21:25:46.255359  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 21:25:46.255466  527777 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 21:25:46.255590  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 21:25:46.255713  527777 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 21:25:46.255816  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 21:25:46.255856  527777 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 21:25:46.256011  527777 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 21:25:46.256134  527777 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 21:25:46.256200  527777 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000272278s
	I1201 21:25:46.256203  527777 kubeadm.go:319] 
	I1201 21:25:46.256259  527777 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1201 21:25:46.256290  527777 kubeadm.go:319] 	- The kubelet is not running
	I1201 21:25:46.256400  527777 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1201 21:25:46.256404  527777 kubeadm.go:319] 
	I1201 21:25:46.256508  527777 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1201 21:25:46.256540  527777 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1201 21:25:46.256569  527777 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1201 21:25:46.256592  527777 kubeadm.go:319] 
	I1201 21:25:46.256631  527777 kubeadm.go:403] duration metric: took 12m5.895739008s to StartCluster
	I1201 21:25:46.256661  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:25:46.256721  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:25:46.286008  527777 cri.go:89] found id: ""
	I1201 21:25:46.286022  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.286029  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:25:46.286034  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:25:46.286096  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:25:46.311936  527777 cri.go:89] found id: ""
	I1201 21:25:46.311950  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.311957  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:25:46.311963  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:25:46.312022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:25:46.338008  527777 cri.go:89] found id: ""
	I1201 21:25:46.338022  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.338029  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:25:46.338035  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:25:46.338094  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:25:46.364430  527777 cri.go:89] found id: ""
	I1201 21:25:46.364446  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.364453  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:25:46.364459  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:25:46.364519  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:25:46.390553  527777 cri.go:89] found id: ""
	I1201 21:25:46.390568  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.390574  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:25:46.390580  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:25:46.390638  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:25:46.416135  527777 cri.go:89] found id: ""
	I1201 21:25:46.416149  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.416156  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:25:46.416161  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:25:46.416215  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:25:46.441110  527777 cri.go:89] found id: ""
	I1201 21:25:46.441124  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.441131  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:25:46.441139  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:25:46.441160  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:25:46.456311  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:25:46.456328  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:25:46.535568  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:25:46.527894   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.528437   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.529932   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.530345   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.531878   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:25:46.527894   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.528437   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.529932   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.530345   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.531878   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:25:46.535579  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:25:46.535591  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:25:46.613336  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:25:46.613357  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:25:46.643384  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:25:46.643410  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1201 21:25:46.714793  527777 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000272278s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1201 21:25:46.714844  527777 out.go:285] * 
	W1201 21:25:46.714913  527777 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000272278s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1201 21:25:46.714940  527777 out.go:285] * 
	W1201 21:25:46.717121  527777 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 21:25:46.722121  527777 out.go:203] 
	W1201 21:25:46.725981  527777 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000272278s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1201 21:25:46.726037  527777 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1201 21:25:46.726060  527777 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1201 21:25:46.729457  527777 out.go:203] 
	
	
	==> CRI-O <==
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028303365Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028345423Z" level=info msg="Starting seccomp notifier watcher"
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028394488Z" level=info msg="Create NRI interface"
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028507921Z" level=info msg="built-in NRI default validator is disabled"
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028517906Z" level=info msg="runtime interface created"
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028533045Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.02854001Z" level=info msg="runtime interface starting up..."
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028547362Z" level=info msg="starting plugins..."
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028562434Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028635598Z" level=info msg="No systemd watchdog enabled"
	Dec 01 21:13:39 functional-198694 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.006897207Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=6899020c-e81d-4ca2-b78d-1b19ba925f8d name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.008172907Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=85515e67-9e24-4eed-9690-db5bbe0ab759 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.009097715Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=c8e368b2-0181-4d6d-8bf3-4e28d45c02c7 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.009733916Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=0fc0011a-c9a7-42d2-a5b8-995e0a543565 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.010282103Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=345cda63-6b08-4110-81b2-46c3bae48473 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.010980374Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=5390fbb3-60dc-4145-9a48-c3c46e1b2cb6 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.011663876Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=6ae96308-5839-4062-8cab-2394de4e389c name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.162637929Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=d0f240a5-2441-4e93-9b4a-f3d4bd7ad9c7 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.164075956Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=0616d176-adc2-492a-ae1c-f0f024bafeaf name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.164807688Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=b908770b-6817-4806-aa77-5607a1538338 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.167796208Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=3df13166-7c04-4daf-93af-7c9be539fdad name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.168806661Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=16a3784c-0cb1-4a72-824d-e721ee5352ce name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.16959947Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=de159f67-5257-40aa-8e51-ddafe4c8e78c name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.170614172Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1626a35a-f287-41b9-b7fb-8a0f7945ff57 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:25:47.947398   21641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:47.948051   21641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:47.949868   21641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:47.950577   21641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:47.952189   21641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 1 19:31] hrtimer: interrupt took 3224715 ns
	[Dec 1 20:00] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:16] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 1 20:22] systemd-journald[231]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 1 20:37] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:38] overlayfs: idmapped layers are currently not supported
	[  +0.076902] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 1 20:44] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:45] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:58] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:25:47 up  3:08,  0 user,  load average: 0.07, 0.19, 0.39
	Linux functional-198694 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 01 21:25:45 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:25:46 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 640.
	Dec 01 21:25:46 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:25:46 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:25:46 functional-198694 kubelet[21452]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:25:46 functional-198694 kubelet[21452]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:25:46 functional-198694 kubelet[21452]: E1201 21:25:46.195977   21452 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:25:46 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:25:46 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:25:46 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 641.
	Dec 01 21:25:46 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:25:46 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:25:46 functional-198694 kubelet[21541]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:25:46 functional-198694 kubelet[21541]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:25:46 functional-198694 kubelet[21541]: E1201 21:25:46.960722   21541 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:25:46 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:25:46 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:25:47 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 642.
	Dec 01 21:25:47 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:25:47 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:25:47 functional-198694 kubelet[21580]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:25:47 functional-198694 kubelet[21580]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:25:47 functional-198694 kubelet[21580]: E1201 21:25:47.734730   21580 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:25:47 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:25:47 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694: exit status 2 (357.223677ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-198694" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (733.63s)
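The repeated kubelet crash loop above ("kubelet is configured to not run on a host using cgroup v1") matches the SystemVerification warning earlier in the log: kubelet v1.35 refuses cgroup v1 hosts unless the configuration option named in that warning is set. A minimal KubeletConfiguration override would look like the following sketch; the field name is taken from the warning text, and whether minikube forwards it correctly on this host is an assumption, not something this run verifies:

```yaml
# Hypothetical KubeletConfiguration fragment, per the
# "[WARNING SystemVerification]" message in this log: explicitly
# re-enables cgroup v1 support for kubelet v1.35+ on this node.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false
```

Per the warning, the corresponding kubeadm SystemVerification check must also be skipped explicitly; migrating the host to cgroups v2 remains the recommended fix.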

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-198694 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-198694 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (61.186119ms)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-198694 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-198694
helpers_test.go:243: (dbg) docker inspect functional-198694:

-- stdout --
	[
	    {
	        "Id": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	        "Created": "2025-12-01T20:58:43.365574809Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515902,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:58:43.423541772Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hostname",
	        "HostsPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hosts",
	        "LogPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8-json.log",
	        "Name": "/functional-198694",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-198694:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-198694",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	                "LowerDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26-init/diff:/var/lib/docker/overlay2/f0ba49b44048d740697b37803f992c2f7a99e21ce77995ff128ceffc01329aa1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-198694",
	                "Source": "/var/lib/docker/volumes/functional-198694/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-198694",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-198694",
	                "name.minikube.sigs.k8s.io": "functional-198694",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8cb3cb57c35171bfce361b9e0de9c9f36ef89baf5e4ad0dd73159d10f1056820",
	            "SandboxKey": "/var/run/docker/netns/8cb3cb57c351",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-198694": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:9a:72:4c:a4:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9750c903db8645b2871ee2eb6fd897b77e607b9a995005513c7bcf81da63c819",
	                    "EndpointID": "884d9ec9fdfc44c10ccd4516f4ea05a765fb3ccb2118db0e8af2392e8613c402",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-198694",
	                        "e545295bd958"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694: exit status 2 (317.591851ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-074555 image ls --format yaml --alsologtostderr                                                                                        │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ ssh     │ functional-074555 ssh pgrep buildkitd                                                                                                             │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │                     │
	│ image   │ functional-074555 image ls --format json --alsologtostderr                                                                                        │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image   │ functional-074555 image ls --format table --alsologtostderr                                                                                       │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image   │ functional-074555 image build -t localhost/my-image:functional-074555 testdata/build --alsologtostderr                                            │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ image   │ functional-074555 image ls                                                                                                                        │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ delete  │ -p functional-074555                                                                                                                              │ functional-074555 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │ 01 Dec 25 20:58 UTC │
	│ start   │ -p functional-198694 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 20:58 UTC │                     │
	│ start   │ -p functional-198694 --alsologtostderr -v=8                                                                                                       │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:07 UTC │                     │
	│ cache   │ functional-198694 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ functional-198694 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ functional-198694 cache add registry.k8s.io/pause:latest                                                                                          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ functional-198694 cache add minikube-local-cache-test:functional-198694                                                                           │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ functional-198694 cache delete minikube-local-cache-test:functional-198694                                                                        │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ ssh     │ functional-198694 ssh sudo crictl images                                                                                                          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ ssh     │ functional-198694 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ ssh     │ functional-198694 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │                     │
	│ cache   │ functional-198694 cache reload                                                                                                                    │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ ssh     │ functional-198694 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ kubectl │ functional-198694 kubectl -- --context functional-198694 get pods                                                                                 │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │                     │
	│ start   │ -p functional-198694 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 21:13:35
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 21:13:35.338314  527777 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:13:35.338426  527777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:13:35.338431  527777 out.go:374] Setting ErrFile to fd 2...
	I1201 21:13:35.338435  527777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:13:35.339011  527777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:13:35.339669  527777 out.go:368] Setting JSON to false
	I1201 21:13:35.340628  527777 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10565,"bootTime":1764613051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 21:13:35.340767  527777 start.go:143] virtualization:  
	I1201 21:13:35.344231  527777 out.go:179] * [functional-198694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 21:13:35.348003  527777 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 21:13:35.348182  527777 notify.go:221] Checking for updates...
	I1201 21:13:35.353585  527777 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 21:13:35.356421  527777 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:13:35.359084  527777 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 21:13:35.361859  527777 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 21:13:35.364606  527777 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 21:13:35.367906  527777 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:13:35.368004  527777 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 21:13:35.404299  527777 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 21:13:35.404422  527777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:13:35.463515  527777 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-01 21:13:35.453981974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:13:35.463609  527777 docker.go:319] overlay module found
	I1201 21:13:35.466875  527777 out.go:179] * Using the docker driver based on existing profile
	I1201 21:13:35.469781  527777 start.go:309] selected driver: docker
	I1201 21:13:35.469793  527777 start.go:927] validating driver "docker" against &{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:13:35.469882  527777 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 21:13:35.469988  527777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:13:35.530406  527777 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-01 21:13:35.520549629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:13:35.530815  527777 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 21:13:35.530841  527777 cni.go:84] Creating CNI manager for ""
	I1201 21:13:35.530897  527777 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:13:35.530938  527777 start.go:353] cluster config:
	{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:13:35.534086  527777 out.go:179] * Starting "functional-198694" primary control-plane node in "functional-198694" cluster
	I1201 21:13:35.536995  527777 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 21:13:35.539929  527777 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 21:13:35.542786  527777 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:13:35.542873  527777 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 21:13:35.563189  527777 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 21:13:35.563200  527777 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1201 21:13:35.608993  527777 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1201 21:13:35.806403  527777 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1201 21:13:35.806571  527777 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/config.json ...
	I1201 21:13:35.806600  527777 cache.go:107] acquiring lock: {Name:mkc02adc0b0ac86da96d7b1c6f73dd96db198bdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806692  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 21:13:35.806702  527777 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 120.653µs
	I1201 21:13:35.806710  527777 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 21:13:35.806721  527777 cache.go:107] acquiring lock: {Name:mk453dcc67fddeb9d4497c9de9efb4fa1295449c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806753  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 21:13:35.806758  527777 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 38.825µs
	I1201 21:13:35.806764  527777 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806774  527777 cache.go:107] acquiring lock: {Name:mk419ddf7fad28d46855543ef84396416e53becc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806815  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 21:13:35.806831  527777 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 48.901µs
	I1201 21:13:35.806838  527777 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806850  527777 cache.go:243] Successfully downloaded all kic artifacts
	I1201 21:13:35.806851  527777 cache.go:107] acquiring lock: {Name:mka55d294ab8a696f44b35601f713e0abbf24c5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806885  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 21:13:35.806880  527777 start.go:360] acquireMachinesLock for functional-198694: {Name:mk75190be8638b73bbf357fb21be879be3d32136 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806893  527777 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 44.405µs
	I1201 21:13:35.806899  527777 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806914  527777 cache.go:107] acquiring lock: {Name:mk6dcec1fac0989e081c750d70caa7d5974f0e1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806939  527777 start.go:364] duration metric: took 38.547µs to acquireMachinesLock for "functional-198694"
	I1201 21:13:35.806944  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 21:13:35.806949  527777 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 42.124µs
	I1201 21:13:35.806954  527777 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806962  527777 start.go:96] Skipping create...Using existing machine configuration
	I1201 21:13:35.806968  527777 fix.go:54] fixHost starting: 
	I1201 21:13:35.806963  527777 cache.go:107] acquiring lock: {Name:mkf9aa1f704582196eb72cf90c132f43843b4423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806991  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1201 21:13:35.806995  527777 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.558µs
	I1201 21:13:35.807007  527777 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 21:13:35.807016  527777 cache.go:107] acquiring lock: {Name:mk60d129c4890b38a9b86e2bfa4a9fa21bc4f57a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.807045  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 21:13:35.807049  527777 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 34.657µs
	I1201 21:13:35.807054  527777 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 21:13:35.807062  527777 cache.go:107] acquiring lock: {Name:mk345d9c863dd9143d9156cb17f795118869c197 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.807089  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 21:13:35.807094  527777 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 32.54µs
	I1201 21:13:35.807099  527777 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 21:13:35.807107  527777 cache.go:87] Successfully saved all images to host disk.
	I1201 21:13:35.807314  527777 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:13:35.826290  527777 fix.go:112] recreateIfNeeded on functional-198694: state=Running err=<nil>
	W1201 21:13:35.826315  527777 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 21:13:35.829729  527777 out.go:252] * Updating the running docker "functional-198694" container ...
	I1201 21:13:35.829761  527777 machine.go:94] provisionDockerMachine start ...
	I1201 21:13:35.829853  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:35.849270  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:35.849646  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:35.849655  527777 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 21:13:36.014195  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:13:36.014211  527777 ubuntu.go:182] provisioning hostname "functional-198694"
	I1201 21:13:36.014280  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:36.035339  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:36.035672  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:36.035681  527777 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-198694 && echo "functional-198694" | sudo tee /etc/hostname
	I1201 21:13:36.197202  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:13:36.197287  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:36.217632  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:36.217935  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:36.217948  527777 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-198694' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-198694/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-198694' | sudo tee -a /etc/hosts; 
				fi
			fi
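The /etc/hosts patch that minikube runs over SSH above can be exercised locally against a scratch copy, without sudo (a sketch; the file contents and the temp-file handling here are illustrative, only the hostname value comes from the log):

```shell
#!/bin/sh
# Reproduce minikube's /etc/hosts hostname fix on a scratch file.
HOSTS=$(mktemp)
NAME=functional-198694
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# Same logic as the provisioning command: if the hostname is not yet
# present, either rewrite the existing 127.0.1.1 entry or append one.
if ! grep -q "\s$NAME$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1\s' "$HOSTS"; then
    # an old 127.0.1.1 entry exists: replace it with the new hostname
    sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # no 127.0.1.1 line yet: append one
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
grep '^127\.0\.1\.1' "$HOSTS"
```

Running this prints the rewritten entry; a second run is a no-op because the outer grep then matches, which is why the real command is safe to repeat on restarts.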
	I1201 21:13:36.367610  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 21:13:36.367629  527777 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-482752/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-482752/.minikube}
	I1201 21:13:36.367658  527777 ubuntu.go:190] setting up certificates
	I1201 21:13:36.367666  527777 provision.go:84] configureAuth start
	I1201 21:13:36.367747  527777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:13:36.387555  527777 provision.go:143] copyHostCerts
	I1201 21:13:36.387627  527777 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem, removing ...
	I1201 21:13:36.387641  527777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 21:13:36.387724  527777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem (1082 bytes)
	I1201 21:13:36.387835  527777 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem, removing ...
	I1201 21:13:36.387840  527777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 21:13:36.387866  527777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem (1123 bytes)
	I1201 21:13:36.387928  527777 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem, removing ...
	I1201 21:13:36.387933  527777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 21:13:36.387959  527777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem (1675 bytes)
	I1201 21:13:36.388014  527777 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem org=jenkins.functional-198694 san=[127.0.0.1 192.168.49.2 functional-198694 localhost minikube]
	I1201 21:13:36.864413  527777 provision.go:177] copyRemoteCerts
	I1201 21:13:36.864488  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 21:13:36.864542  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:36.883147  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:36.987572  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 21:13:37.015924  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1201 21:13:37.037590  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1201 21:13:37.056483  527777 provision.go:87] duration metric: took 688.787749ms to configureAuth
	I1201 21:13:37.056502  527777 ubuntu.go:206] setting minikube options for container-runtime
	I1201 21:13:37.056696  527777 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:13:37.056802  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.075104  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:37.075454  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:37.075468  527777 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 21:13:37.432424  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 21:13:37.432439  527777 machine.go:97] duration metric: took 1.602671146s to provisionDockerMachine
	I1201 21:13:37.432451  527777 start.go:293] postStartSetup for "functional-198694" (driver="docker")
	I1201 21:13:37.432466  527777 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 21:13:37.432544  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 21:13:37.432606  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.457485  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.563609  527777 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 21:13:37.567292  527777 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 21:13:37.567310  527777 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 21:13:37.567329  527777 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/addons for local assets ...
	I1201 21:13:37.567430  527777 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/files for local assets ...
	I1201 21:13:37.567517  527777 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> 4860022.pem in /etc/ssl/certs
	I1201 21:13:37.567613  527777 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts -> hosts in /etc/test/nested/copy/486002
	I1201 21:13:37.567670  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/486002
	I1201 21:13:37.575725  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:13:37.593481  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts --> /etc/test/nested/copy/486002/hosts (40 bytes)
	I1201 21:13:37.611620  527777 start.go:296] duration metric: took 179.151488ms for postStartSetup
	I1201 21:13:37.611718  527777 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 21:13:37.611798  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.629587  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.732362  527777 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 21:13:37.737388  527777 fix.go:56] duration metric: took 1.930412863s for fixHost
	I1201 21:13:37.737414  527777 start.go:83] releasing machines lock for "functional-198694", held for 1.930466515s
	I1201 21:13:37.737492  527777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:13:37.754641  527777 ssh_runner.go:195] Run: cat /version.json
	I1201 21:13:37.754685  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.754954  527777 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 21:13:37.755010  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.773486  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.787845  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.875124  527777 ssh_runner.go:195] Run: systemctl --version
	I1201 21:13:37.974016  527777 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 21:13:38.017000  527777 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 21:13:38.021875  527777 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 21:13:38.021957  527777 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 21:13:38.031594  527777 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 21:13:38.031622  527777 start.go:496] detecting cgroup driver to use...
	I1201 21:13:38.031660  527777 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1201 21:13:38.031747  527777 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 21:13:38.049187  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 21:13:38.064637  527777 docker.go:218] disabling cri-docker service (if available) ...
	I1201 21:13:38.064721  527777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 21:13:38.083239  527777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 21:13:38.097453  527777 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 21:13:38.249215  527777 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 21:13:38.371691  527777 docker.go:234] disabling docker service ...
	I1201 21:13:38.371769  527777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 21:13:38.388782  527777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 21:13:38.402306  527777 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 21:13:38.513914  527777 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 21:13:38.630153  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 21:13:38.644475  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 21:13:38.658966  527777 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 21:13:38.659023  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.668135  527777 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 21:13:38.668192  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.677509  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.686682  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.695781  527777 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 21:13:38.704147  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.713420  527777 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.722196  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
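The sequence of `sed` edits above rewrites `/etc/crio/crio.conf.d/02-crio.conf` in place. Their net effect can be previewed on a scratch copy (the fixture contents below are illustrative stand-ins, not the real shipped config; the sed expressions are the ones from the log):

```shell
#!/bin/sh
# Apply minikube's cri-o config edits to a scratch copy of 02-crio.conf.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
pause_image = "registry.k8s.io/pause:3.9"
EOF

# 1. point cri-o at the pause image minikube expects
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
# 2. switch the cgroup driver to cgroupfs
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
# 3. drop any existing conmon_cgroup line, then re-add it after cgroup_manager
sed -i '/conmon_cgroup = .*/d' "$CONF"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
cat "$CONF"
```

The delete-then-append dance for `conmon_cgroup` makes the edit idempotent: rerunning it never stacks duplicate lines.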
	I1201 21:13:38.731481  527777 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 21:13:38.740144  527777 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 21:13:38.748176  527777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:13:38.858298  527777 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 21:13:39.035375  527777 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 21:13:39.035464  527777 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 21:13:39.039668  527777 start.go:564] Will wait 60s for crictl version
	I1201 21:13:39.039730  527777 ssh_runner.go:195] Run: which crictl
	I1201 21:13:39.043260  527777 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 21:13:39.078386  527777 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 21:13:39.078499  527777 ssh_runner.go:195] Run: crio --version
	I1201 21:13:39.110667  527777 ssh_runner.go:195] Run: crio --version
	I1201 21:13:39.146750  527777 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1201 21:13:39.149800  527777 cli_runner.go:164] Run: docker network inspect functional-198694 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"

	I1201 21:13:39.166717  527777 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1201 21:13:39.173972  527777 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1201 21:13:39.176755  527777 kubeadm.go:884] updating cluster {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 21:13:39.176898  527777 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:13:39.176968  527777 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 21:13:39.210945  527777 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 21:13:39.210958  527777 cache_images.go:86] Images are preloaded, skipping loading
	I1201 21:13:39.210965  527777 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1201 21:13:39.211070  527777 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-198694 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 21:13:39.211187  527777 ssh_runner.go:195] Run: crio config
	I1201 21:13:39.284437  527777 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1201 21:13:39.284481  527777 cni.go:84] Creating CNI manager for ""
	I1201 21:13:39.284491  527777 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:13:39.284499  527777 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 21:13:39.284522  527777 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-198694 NodeName:functional-198694 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 21:13:39.284675  527777 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-198694"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 21:13:39.284759  527777 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 21:13:39.293198  527777 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 21:13:39.293275  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 21:13:39.301290  527777 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 21:13:39.315108  527777 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 21:13:39.329814  527777 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1201 21:13:39.343669  527777 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1201 21:13:39.347900  527777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:13:39.461077  527777 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 21:13:39.654352  527777 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694 for IP: 192.168.49.2
	I1201 21:13:39.654364  527777 certs.go:195] generating shared ca certs ...
	I1201 21:13:39.654379  527777 certs.go:227] acquiring lock for ca certs: {Name:mk0475ccdbd6f854bab22fd8dfb32cc1af021336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:13:39.654515  527777 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key
	I1201 21:13:39.654555  527777 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key
	I1201 21:13:39.654570  527777 certs.go:257] generating profile certs ...
	I1201 21:13:39.654666  527777 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key
	I1201 21:13:39.654727  527777 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key.ab5f5a28
	I1201 21:13:39.654771  527777 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key
	I1201 21:13:39.654890  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem (1338 bytes)
	W1201 21:13:39.654921  527777 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002_empty.pem, impossibly tiny 0 bytes
	I1201 21:13:39.654928  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 21:13:39.654965  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem (1082 bytes)
	I1201 21:13:39.655015  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem (1123 bytes)
	I1201 21:13:39.655038  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem (1675 bytes)
	I1201 21:13:39.655084  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:13:39.655762  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 21:13:39.683427  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 21:13:39.704542  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 21:13:39.724282  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1201 21:13:39.744046  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 21:13:39.765204  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 21:13:39.784677  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 21:13:39.803885  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 21:13:39.822965  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /usr/share/ca-certificates/4860022.pem (1708 bytes)
	I1201 21:13:39.842026  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 21:13:39.860451  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem --> /usr/share/ca-certificates/486002.pem (1338 bytes)
	I1201 21:13:39.879380  527777 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 21:13:39.893847  527777 ssh_runner.go:195] Run: openssl version
	I1201 21:13:39.900456  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 21:13:39.910454  527777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:13:39.914599  527777 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:13:39.914672  527777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:13:39.957573  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 21:13:39.966576  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/486002.pem && ln -fs /usr/share/ca-certificates/486002.pem /etc/ssl/certs/486002.pem"
	I1201 21:13:39.976178  527777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/486002.pem
	I1201 21:13:39.980649  527777 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 21:13:39.980729  527777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/486002.pem
	I1201 21:13:40.025575  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/486002.pem /etc/ssl/certs/51391683.0"
	I1201 21:13:40.037195  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4860022.pem && ln -fs /usr/share/ca-certificates/4860022.pem /etc/ssl/certs/4860022.pem"
	I1201 21:13:40.047283  527777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4860022.pem
	I1201 21:13:40.051903  527777 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 21:13:40.051976  527777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4860022.pem
	I1201 21:13:40.094396  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4860022.pem /etc/ssl/certs/3ec20f2e.0"
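[Annotation, not part of the log: the `openssl x509 -hash` / `ln -fs` pairs above follow OpenSSL's CA lookup convention, where a trusted cert is located via a symlink named `<subject-hash>.0` in the cert directory. A minimal, self-contained sketch with a throwaway CA under a temp dir (names like `demoCA` are illustrative, not from the log):]

```shell
# Create a throwaway self-signed CA, then install it the way the
# log above does: compute the subject hash and symlink <hash>.0 to it.
certdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$certdir/ca.key" -out "$certdir/ca.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$certdir/ca.pem")
ln -fs "$certdir/ca.pem" "$certdir/$hash.0"
```

[OpenSSL clients pointed at that directory can then find the CA by hash, which is why minikube guards each link with `test -L <hash>.0 || ln -fs ...`.]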
	I1201 21:13:40.103155  527777 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 21:13:40.107392  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 21:13:40.150081  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 21:13:40.192825  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 21:13:40.234772  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 21:13:40.276722  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 21:13:40.318487  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
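[Annotation, not part of the log: the `-checkend 86400` runs above are expiry checks — `openssl x509 -checkend N` exits 0 if the certificate will still be valid N seconds (here 24h) from now, and 1 otherwise. Sketch with a freshly minted 2-day cert:]

```shell
# Generate a cert valid for 2 days, then verify it survives the same
# 24h (86400s) lookahead check the log applies to each control-plane cert.
certdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=t" \
  -keyout "$certdir/t.key" -out "$certdir/t.pem" -days 2 2>/dev/null
if openssl x509 -noout -in "$certdir/t.pem" -checkend 86400; then
  echo "valid for at least another day"
fi
```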
	I1201 21:13:40.360912  527777 kubeadm.go:401] StartCluster: {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:13:40.361001  527777 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 21:13:40.361062  527777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 21:13:40.390972  527777 cri.go:89] found id: ""
	I1201 21:13:40.391046  527777 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 21:13:40.399343  527777 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 21:13:40.399354  527777 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 21:13:40.399410  527777 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 21:13:40.407260  527777 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.407785  527777 kubeconfig.go:125] found "functional-198694" server: "https://192.168.49.2:8441"
	I1201 21:13:40.409130  527777 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 21:13:40.418081  527777 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-01 20:59:03.175067800 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-01 21:13:39.337074315 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
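[Annotation, not part of the log: the drift detection above hinges on `diff`'s exit status — 0 when the deployed `kubeadm.yaml` matches the newly generated one, 1 when they differ, which is what triggers the reconfigure path. Minimal sketch of that decision (file contents are illustrative):]

```shell
# diff exits 0 for identical files, 1 for differing files; a nonzero
# status here stands in for "kubeadm config drift detected".
old=$(mktemp); new=$(mktemp)
printf 'enable-admission-plugins: NamespaceLifecycle\n' > "$old"
printf 'enable-admission-plugins: NamespaceAutoProvision\n' > "$new"
if ! diff -u "$old" "$new" > /dev/null; then
  echo "config drift: will reconfigure"
fi
```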
	I1201 21:13:40.418090  527777 kubeadm.go:1161] stopping kube-system containers ...
	I1201 21:13:40.418103  527777 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1201 21:13:40.418160  527777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 21:13:40.458573  527777 cri.go:89] found id: ""
	I1201 21:13:40.458639  527777 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1201 21:13:40.477506  527777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 21:13:40.486524  527777 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  1 21:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  1 21:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  1 21:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  1 21:03 /etc/kubernetes/scheduler.conf
	
	I1201 21:13:40.486611  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1201 21:13:40.494590  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1201 21:13:40.502887  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.502952  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 21:13:40.511354  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1201 21:13:40.519815  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.519872  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 21:13:40.528897  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1201 21:13:40.537744  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.537819  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 21:13:40.546165  527777 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 21:13:40.555103  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:40.603848  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:41.842196  527777 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.238322261s)
	I1201 21:13:41.842271  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:42.059194  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:42.130722  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:42.199813  527777 api_server.go:52] waiting for apiserver process to appear ...
	I1201 21:13:42.199901  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 118 further `Run: sudo pgrep -xnf kube-apiserver.*minikube.*` polls, one every ~500ms from 21:13:42.700 through 21:14:41.700, elided; none found an apiserver process ...]
	I1201 21:14:41.700037  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
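[Annotation, not part of the log: the retries above are a plain poll-until-found loop — run `pgrep` roughly every 500ms until the target process appears or a deadline passes. Sketch that waits for a stand-in process (`sleep 3`) instead of kube-apiserver:]

```shell
# Poll for a process by exact full command line, with a deadline.
sleep 3 &                        # stand-in for the awaited process
deadline=$(( $(date +%s) + 5 ))
status=timeout
while [ "$(date +%s)" -lt "$deadline" ]; do
  if pgrep -xf "sleep 3" > /dev/null; then status=found; break; fi
  sleep 0.5
done
echo "$status"
```

[In the log the deadline is never reached by a match: the loop runs the full minute and minikube falls through to log gathering.]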
	I1201 21:14:42.200288  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:42.200384  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:42.231074  527777 cri.go:89] found id: ""
	I1201 21:14:42.231090  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.231099  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:42.231105  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:42.231205  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:42.260877  527777 cri.go:89] found id: ""
	I1201 21:14:42.260892  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.260900  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:42.260906  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:42.260972  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:42.290930  527777 cri.go:89] found id: ""
	I1201 21:14:42.290944  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.290953  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:42.290960  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:42.291034  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:42.323761  527777 cri.go:89] found id: ""
	I1201 21:14:42.323776  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.323784  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:42.323790  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:42.323870  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:42.356722  527777 cri.go:89] found id: ""
	I1201 21:14:42.356738  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.356748  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:42.356756  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:42.356820  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:42.387639  527777 cri.go:89] found id: ""
	I1201 21:14:42.387654  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.387661  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:42.387667  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:42.387738  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:42.433777  527777 cri.go:89] found id: ""
	I1201 21:14:42.433791  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.433798  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:42.433806  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:42.433815  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:42.520716  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:42.520743  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:42.536803  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:42.536820  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:42.605090  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:42.597365   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.598034   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.599719   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.600043   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.601473   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:42.597365   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.598034   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.599719   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.600043   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.601473   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:42.605114  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:42.605125  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:42.679935  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:42.679957  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:45.213941  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:45.229905  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:45.229984  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:45.276158  527777 cri.go:89] found id: ""
	I1201 21:14:45.276174  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.276181  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:45.276187  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:45.276259  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:45.307844  527777 cri.go:89] found id: ""
	I1201 21:14:45.307859  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.307867  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:45.307872  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:45.307946  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:45.339831  527777 cri.go:89] found id: ""
	I1201 21:14:45.339845  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.339853  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:45.339858  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:45.339922  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:45.371617  527777 cri.go:89] found id: ""
	I1201 21:14:45.371632  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.371640  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:45.371646  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:45.371705  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:45.399984  527777 cri.go:89] found id: ""
	I1201 21:14:45.400005  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.400012  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:45.400017  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:45.400086  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:45.441742  527777 cri.go:89] found id: ""
	I1201 21:14:45.441755  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.441763  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:45.441769  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:45.441843  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:45.474201  527777 cri.go:89] found id: ""
	I1201 21:14:45.474216  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.474223  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:45.474231  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:45.474241  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:45.541899  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:45.541920  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:45.557525  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:45.557541  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:45.623123  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:45.614602   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.615281   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.616956   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.617711   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.619627   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:14:45.623165  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:45.623176  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:45.703324  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:45.703344  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:48.232324  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:48.242709  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:48.242767  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:48.273768  527777 cri.go:89] found id: ""
	I1201 21:14:48.273782  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.273790  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:48.273795  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:48.273853  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:48.305133  527777 cri.go:89] found id: ""
	I1201 21:14:48.305147  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.305154  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:48.305159  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:48.305218  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:48.331706  527777 cri.go:89] found id: ""
	I1201 21:14:48.331720  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.331727  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:48.331733  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:48.331805  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:48.357401  527777 cri.go:89] found id: ""
	I1201 21:14:48.357414  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.357421  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:48.357426  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:48.357485  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:48.382601  527777 cri.go:89] found id: ""
	I1201 21:14:48.382615  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.382622  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:48.382627  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:48.382685  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:48.414103  527777 cri.go:89] found id: ""
	I1201 21:14:48.414117  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.414124  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:48.414130  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:48.414192  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:48.444275  527777 cri.go:89] found id: ""
	I1201 21:14:48.444289  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.444296  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:48.444304  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:48.444315  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:48.509613  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:48.500550   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.501177   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.502982   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.503577   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.505352   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:14:48.509633  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:48.509645  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:48.583849  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:48.583868  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:48.611095  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:48.611113  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:48.678045  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:48.678067  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:51.193681  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:51.204158  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:51.204220  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:51.228546  527777 cri.go:89] found id: ""
	I1201 21:14:51.228560  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.228567  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:51.228573  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:51.228641  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:51.253363  527777 cri.go:89] found id: ""
	I1201 21:14:51.253377  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.253384  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:51.253389  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:51.253450  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:51.281388  527777 cri.go:89] found id: ""
	I1201 21:14:51.281403  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.281410  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:51.281415  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:51.281472  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:51.312321  527777 cri.go:89] found id: ""
	I1201 21:14:51.312334  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.312341  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:51.312347  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:51.312404  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:51.338071  527777 cri.go:89] found id: ""
	I1201 21:14:51.338084  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.338092  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:51.338097  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:51.338160  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:51.362911  527777 cri.go:89] found id: ""
	I1201 21:14:51.362925  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.362932  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:51.362938  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:51.362996  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:51.392560  527777 cri.go:89] found id: ""
	I1201 21:14:51.392575  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.392582  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:51.392589  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:51.392600  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:51.462446  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:51.462465  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:51.483328  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:51.483345  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:51.550537  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:51.542392   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.543042   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.544572   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.545190   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.546918   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:14:51.550546  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:51.550556  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:51.627463  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:51.627484  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:54.160747  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:54.171038  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:54.171098  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:54.197306  527777 cri.go:89] found id: ""
	I1201 21:14:54.197320  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.197327  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:54.197333  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:54.197389  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:54.227205  527777 cri.go:89] found id: ""
	I1201 21:14:54.227219  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.227226  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:54.227232  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:54.227293  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:54.254126  527777 cri.go:89] found id: ""
	I1201 21:14:54.254141  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.254149  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:54.254156  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:54.254218  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:54.282152  527777 cri.go:89] found id: ""
	I1201 21:14:54.282166  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.282173  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:54.282178  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:54.282234  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:54.312220  527777 cri.go:89] found id: ""
	I1201 21:14:54.312234  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.312241  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:54.312246  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:54.312314  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:54.338233  527777 cri.go:89] found id: ""
	I1201 21:14:54.338247  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.338253  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:54.338259  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:54.338317  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:54.364068  527777 cri.go:89] found id: ""
	I1201 21:14:54.364082  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.364089  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:54.364097  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:54.364119  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:54.429655  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:54.429673  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:54.445696  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:54.445712  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:54.514079  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:54.504989   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.506549   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.507008   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.508528   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.508981   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:14:54.514090  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:54.514100  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:54.590504  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:54.590526  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:57.119842  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:57.129802  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:57.129862  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:57.154250  527777 cri.go:89] found id: ""
	I1201 21:14:57.154263  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.154271  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:57.154276  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:57.154332  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:57.179738  527777 cri.go:89] found id: ""
	I1201 21:14:57.179761  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.179768  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:57.179775  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:57.179838  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:57.209881  527777 cri.go:89] found id: ""
	I1201 21:14:57.209895  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.209902  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:57.209907  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:57.209964  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:57.239761  527777 cri.go:89] found id: ""
	I1201 21:14:57.239775  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.239782  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:57.239787  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:57.239851  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:57.265438  527777 cri.go:89] found id: ""
	I1201 21:14:57.265457  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.265464  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:57.265470  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:57.265531  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:57.292117  527777 cri.go:89] found id: ""
	I1201 21:14:57.292131  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.292139  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:57.292145  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:57.292211  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:57.321507  527777 cri.go:89] found id: ""
	I1201 21:14:57.321526  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.321539  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:57.321547  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:57.321562  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:57.355489  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:57.355506  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:57.422253  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:57.422274  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:57.439866  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:57.439884  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:57.517974  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:57.510196   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.510601   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.512297   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.512646   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.514195   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:14:57.517984  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:57.517997  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:00.095116  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:00.167383  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:00.167484  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:00.305857  527777 cri.go:89] found id: ""
	I1201 21:15:00.305874  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.305881  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:00.305888  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:00.305960  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:00.412948  527777 cri.go:89] found id: ""
	I1201 21:15:00.412964  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.412972  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:00.412979  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:00.413063  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:00.497486  527777 cri.go:89] found id: ""
	I1201 21:15:00.497503  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.497511  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:00.497517  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:00.497588  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:00.548544  527777 cri.go:89] found id: ""
	I1201 21:15:00.548558  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.548565  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:00.548571  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:00.548635  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:00.594658  527777 cri.go:89] found id: ""
	I1201 21:15:00.594674  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.594682  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:00.594688  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:00.594758  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:00.625642  527777 cri.go:89] found id: ""
	I1201 21:15:00.625658  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.625665  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:00.625672  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:00.625741  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:00.657944  527777 cri.go:89] found id: ""
	I1201 21:15:00.657968  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.657977  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:00.657987  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:00.657999  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:00.741394  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:00.730733   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.731901   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.732998   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.734744   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.736546   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:00.730733   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.731901   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.732998   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.734744   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.736546   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:00.741407  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:00.741425  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:00.821320  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:00.821344  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:00.857348  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:00.857380  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:00.927631  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:00.927652  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:03.446387  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:03.456673  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:03.456742  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:03.481752  527777 cri.go:89] found id: ""
	I1201 21:15:03.481766  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.481773  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:03.481779  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:03.481837  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:03.509959  527777 cri.go:89] found id: ""
	I1201 21:15:03.509974  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.509982  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:03.509987  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:03.510050  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:03.536645  527777 cri.go:89] found id: ""
	I1201 21:15:03.536659  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.536665  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:03.536671  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:03.536738  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:03.562917  527777 cri.go:89] found id: ""
	I1201 21:15:03.562932  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.562939  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:03.562945  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:03.563005  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:03.589891  527777 cri.go:89] found id: ""
	I1201 21:15:03.589905  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.589912  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:03.589918  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:03.589977  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:03.622362  527777 cri.go:89] found id: ""
	I1201 21:15:03.622376  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.622384  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:03.622390  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:03.622451  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:03.649882  527777 cri.go:89] found id: ""
	I1201 21:15:03.649897  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.649904  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:03.649912  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:03.649922  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:03.726812  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:03.726832  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:03.741643  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:03.741659  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:03.807830  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:03.800226   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.800973   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.802491   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.802813   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.804371   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:03.800226   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.800973   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.802491   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.802813   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.804371   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:03.807840  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:03.807851  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:03.882248  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:03.882268  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:06.412792  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:06.423457  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:06.423520  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:06.450416  527777 cri.go:89] found id: ""
	I1201 21:15:06.450434  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.450441  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:06.450461  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:06.450552  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:06.476229  527777 cri.go:89] found id: ""
	I1201 21:15:06.476243  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.476251  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:06.476257  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:06.476313  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:06.504311  527777 cri.go:89] found id: ""
	I1201 21:15:06.504326  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.504333  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:06.504339  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:06.504400  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:06.531500  527777 cri.go:89] found id: ""
	I1201 21:15:06.531515  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.531523  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:06.531529  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:06.531598  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:06.557205  527777 cri.go:89] found id: ""
	I1201 21:15:06.557219  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.557226  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:06.557231  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:06.557296  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:06.583224  527777 cri.go:89] found id: ""
	I1201 21:15:06.583237  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.583244  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:06.583250  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:06.583309  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:06.609560  527777 cri.go:89] found id: ""
	I1201 21:15:06.609574  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.609581  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:06.609589  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:06.609600  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:06.688119  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:06.688138  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:06.718171  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:06.718187  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:06.788360  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:06.788382  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:06.803516  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:06.803532  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:06.871576  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:06.863363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.864057   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.865787   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.866363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.867937   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:06.863363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.864057   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.865787   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.866363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.867937   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:09.373262  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:09.384129  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:09.384191  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:09.415353  527777 cri.go:89] found id: ""
	I1201 21:15:09.415369  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.415377  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:09.415384  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:09.415449  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:09.441666  527777 cri.go:89] found id: ""
	I1201 21:15:09.441681  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.441689  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:09.441707  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:09.441773  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:09.468735  527777 cri.go:89] found id: ""
	I1201 21:15:09.468749  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.468756  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:09.468761  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:09.468820  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:09.495871  527777 cri.go:89] found id: ""
	I1201 21:15:09.495885  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.495892  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:09.495898  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:09.495960  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:09.522124  527777 cri.go:89] found id: ""
	I1201 21:15:09.522138  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.522145  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:09.522151  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:09.522222  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:09.548540  527777 cri.go:89] found id: ""
	I1201 21:15:09.548554  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.548562  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:09.548568  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:09.548628  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:09.581799  527777 cri.go:89] found id: ""
	I1201 21:15:09.581814  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.581823  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:09.581831  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:09.581842  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:09.653172  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:09.653196  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:09.668649  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:09.668666  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:09.742062  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:09.733951   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.734515   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736072   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736575   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.738046   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:09.733951   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.734515   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736072   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736575   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.738046   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:09.742072  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:09.742085  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:09.817239  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:09.817259  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:12.348410  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:12.358969  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:12.359036  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:12.384762  527777 cri.go:89] found id: ""
	I1201 21:15:12.384776  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.384783  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:12.384788  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:12.384849  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:12.411423  527777 cri.go:89] found id: ""
	I1201 21:15:12.411437  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.411444  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:12.411449  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:12.411508  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:12.436624  527777 cri.go:89] found id: ""
	I1201 21:15:12.436638  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.436645  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:12.436650  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:12.436708  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:12.462632  527777 cri.go:89] found id: ""
	I1201 21:15:12.462647  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.462654  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:12.462661  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:12.462724  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:12.488511  527777 cri.go:89] found id: ""
	I1201 21:15:12.488526  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.488537  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:12.488542  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:12.488601  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:12.514421  527777 cri.go:89] found id: ""
	I1201 21:15:12.514434  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.514441  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:12.514448  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:12.514513  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:12.541557  527777 cri.go:89] found id: ""
	I1201 21:15:12.541571  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.541579  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:12.541587  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:12.541598  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:12.573231  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:12.573249  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:12.641686  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:12.641707  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:12.658713  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:12.658727  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:12.743144  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:12.734976   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.735722   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.737218   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.737705   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.739191   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:12.734976   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.735722   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.737218   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.737705   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.739191   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:12.743155  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:12.743166  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:15.318465  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:15.329023  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:15.329088  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:15.358063  527777 cri.go:89] found id: ""
	I1201 21:15:15.358077  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.358084  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:15.358090  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:15.358148  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:15.387949  527777 cri.go:89] found id: ""
	I1201 21:15:15.387963  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.387971  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:15.387976  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:15.388040  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:15.414396  527777 cri.go:89] found id: ""
	I1201 21:15:15.414412  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.414420  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:15.414425  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:15.414489  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:15.440368  527777 cri.go:89] found id: ""
	I1201 21:15:15.440383  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.440390  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:15.440396  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:15.440455  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:15.471515  527777 cri.go:89] found id: ""
	I1201 21:15:15.471529  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.471538  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:15.471544  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:15.471605  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:15.502736  527777 cri.go:89] found id: ""
	I1201 21:15:15.502750  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.502764  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:15.502770  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:15.502834  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:15.530525  527777 cri.go:89] found id: ""
	I1201 21:15:15.530540  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.530548  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:15.530555  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:15.530566  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:15.597211  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:15.588836   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.589648   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.591302   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.591840   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.593419   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:15.588836   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.589648   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.591302   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.591840   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.593419   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:15.597221  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:15.597232  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:15.673960  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:15.673983  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:15.708635  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:15.708651  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:15.779672  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:15.779693  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:18.296490  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:18.307184  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:18.307258  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:18.340992  527777 cri.go:89] found id: ""
	I1201 21:15:18.341006  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.341021  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:18.341027  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:18.341093  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:18.370602  527777 cri.go:89] found id: ""
	I1201 21:15:18.370626  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.370633  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:18.370642  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:18.370713  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:18.398425  527777 cri.go:89] found id: ""
	I1201 21:15:18.398440  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.398447  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:18.398453  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:18.398527  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:18.424514  527777 cri.go:89] found id: ""
	I1201 21:15:18.424530  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.424537  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:18.424561  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:18.424641  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:18.451718  527777 cri.go:89] found id: ""
	I1201 21:15:18.451732  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.451740  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:18.451746  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:18.451806  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:18.481779  527777 cri.go:89] found id: ""
	I1201 21:15:18.481804  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.481812  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:18.481818  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:18.481885  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:18.509744  527777 cri.go:89] found id: ""
	I1201 21:15:18.509760  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.509767  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:18.509775  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:18.509800  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:18.541318  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:18.541335  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:18.608586  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:18.608608  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:18.625859  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:18.625885  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:18.721362  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:18.711891   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.712647   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.714432   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.715256   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.717230   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:18.711891   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.712647   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.714432   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.715256   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.717230   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:18.721371  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:18.721383  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:21.298842  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:21.309420  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:21.309481  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:21.339650  527777 cri.go:89] found id: ""
	I1201 21:15:21.339664  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.339672  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:21.339678  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:21.339739  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:21.369828  527777 cri.go:89] found id: ""
	I1201 21:15:21.369843  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.369850  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:21.369857  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:21.369925  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:21.396833  527777 cri.go:89] found id: ""
	I1201 21:15:21.396860  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.396868  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:21.396874  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:21.396948  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:21.423340  527777 cri.go:89] found id: ""
	I1201 21:15:21.423354  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.423363  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:21.423369  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:21.423429  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:21.450028  527777 cri.go:89] found id: ""
	I1201 21:15:21.450041  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.450051  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:21.450057  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:21.450115  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:21.476290  527777 cri.go:89] found id: ""
	I1201 21:15:21.476305  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.476312  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:21.476317  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:21.476378  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:21.503570  527777 cri.go:89] found id: ""
	I1201 21:15:21.503591  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.503599  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:21.503607  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:21.503622  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:21.518970  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:21.518995  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:21.583522  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:21.575255   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.575783   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.577341   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.577753   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.579360   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:21.575255   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.575783   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.577341   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.577753   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.579360   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:21.583581  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:21.583592  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:21.662707  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:21.662730  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:21.693467  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:21.693484  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:24.268299  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:24.279383  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:24.279455  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:24.305720  527777 cri.go:89] found id: ""
	I1201 21:15:24.305733  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.305741  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:24.305746  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:24.305809  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:24.333862  527777 cri.go:89] found id: ""
	I1201 21:15:24.333878  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.333885  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:24.333891  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:24.333965  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:24.365916  527777 cri.go:89] found id: ""
	I1201 21:15:24.365931  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.365939  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:24.365948  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:24.366009  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:24.393185  527777 cri.go:89] found id: ""
	I1201 21:15:24.393202  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.393209  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:24.393216  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:24.393279  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:24.419532  527777 cri.go:89] found id: ""
	I1201 21:15:24.419547  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.419554  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:24.419560  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:24.419629  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:24.445529  527777 cri.go:89] found id: ""
	I1201 21:15:24.445543  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.445550  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:24.445557  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:24.445619  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:24.470988  527777 cri.go:89] found id: ""
	I1201 21:15:24.471002  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.471009  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:24.471017  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:24.471028  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:24.500416  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:24.500433  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:24.566009  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:24.566028  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:24.582350  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:24.582366  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:24.653085  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:24.643454   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.643885   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.645413   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.645743   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.647392   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:24.643454   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.643885   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.645413   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.645743   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.647392   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:24.653095  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:24.653106  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:27.239323  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:27.250432  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:27.250495  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:27.276796  527777 cri.go:89] found id: ""
	I1201 21:15:27.276824  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.276832  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:27.276837  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:27.276927  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:27.303592  527777 cri.go:89] found id: ""
	I1201 21:15:27.303607  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.303614  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:27.303620  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:27.303685  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:27.330141  527777 cri.go:89] found id: ""
	I1201 21:15:27.330155  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.330163  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:27.330168  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:27.330231  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:27.358477  527777 cri.go:89] found id: ""
	I1201 21:15:27.358491  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.358498  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:27.358503  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:27.358570  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:27.384519  527777 cri.go:89] found id: ""
	I1201 21:15:27.384533  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.384541  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:27.384547  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:27.384610  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:27.410788  527777 cri.go:89] found id: ""
	I1201 21:15:27.410804  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.410811  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:27.410817  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:27.410880  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:27.437727  527777 cri.go:89] found id: ""
	I1201 21:15:27.437742  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.437748  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:27.437756  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:27.437766  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:27.470359  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:27.470376  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:27.540219  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:27.540239  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:27.558165  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:27.558184  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:27.631990  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:27.624260   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.625006   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626587   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626906   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.628425   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:27.624260   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.625006   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626587   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626906   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.628425   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:27.632001  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:27.632013  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:30.214048  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:30.225906  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:30.225977  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:30.254528  527777 cri.go:89] found id: ""
	I1201 21:15:30.254544  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.254552  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:30.254559  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:30.254627  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:30.282356  527777 cri.go:89] found id: ""
	I1201 21:15:30.282371  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.282379  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:30.282385  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:30.282454  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:30.316244  527777 cri.go:89] found id: ""
	I1201 21:15:30.316266  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.316275  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:30.316281  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:30.316356  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:30.349310  527777 cri.go:89] found id: ""
	I1201 21:15:30.349324  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.349338  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:30.349345  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:30.349413  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:30.379233  527777 cri.go:89] found id: ""
	I1201 21:15:30.379259  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.379267  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:30.379273  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:30.379344  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:30.410578  527777 cri.go:89] found id: ""
	I1201 21:15:30.410592  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.410600  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:30.410607  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:30.410715  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:30.439343  527777 cri.go:89] found id: ""
	I1201 21:15:30.439357  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.439365  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:30.439373  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:30.439383  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:30.469722  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:30.469742  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:30.536977  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:30.536999  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:30.552719  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:30.552738  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:30.625200  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:30.616607   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.617292   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619213   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619905   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.621438   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:30.616607   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.617292   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619213   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619905   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.621438   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:30.625210  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:30.625221  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:33.202525  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:33.213081  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:33.213144  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:33.239684  527777 cri.go:89] found id: ""
	I1201 21:15:33.239699  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.239707  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:33.239713  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:33.239777  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:33.270046  527777 cri.go:89] found id: ""
	I1201 21:15:33.270060  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.270067  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:33.270073  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:33.270134  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:33.298615  527777 cri.go:89] found id: ""
	I1201 21:15:33.298631  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.298639  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:33.298646  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:33.298715  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:33.330389  527777 cri.go:89] found id: ""
	I1201 21:15:33.330403  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.330410  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:33.330416  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:33.330472  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:33.356054  527777 cri.go:89] found id: ""
	I1201 21:15:33.356068  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.356075  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:33.356081  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:33.356147  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:33.385771  527777 cri.go:89] found id: ""
	I1201 21:15:33.385784  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.385792  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:33.385797  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:33.385852  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:33.412562  527777 cri.go:89] found id: ""
	I1201 21:15:33.412580  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.412587  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:33.412601  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:33.412616  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:33.478848  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:33.478868  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:33.494280  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:33.494296  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:33.574855  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:33.566973   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.567796   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569492   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569806   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.571347   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:33.566973   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.567796   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569492   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569806   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.571347   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:33.574866  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:33.574876  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:33.653087  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:33.653110  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:36.198878  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:36.209291  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:36.209352  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:36.234666  527777 cri.go:89] found id: ""
	I1201 21:15:36.234679  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.234686  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:36.234691  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:36.234747  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:36.260740  527777 cri.go:89] found id: ""
	I1201 21:15:36.260754  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.260762  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:36.260767  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:36.260830  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:36.290674  527777 cri.go:89] found id: ""
	I1201 21:15:36.290688  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.290695  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:36.290700  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:36.290800  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:36.317381  527777 cri.go:89] found id: ""
	I1201 21:15:36.317396  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.317404  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:36.317410  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:36.317477  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:36.346371  527777 cri.go:89] found id: ""
	I1201 21:15:36.346384  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.346391  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:36.346396  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:36.346458  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:36.374545  527777 cri.go:89] found id: ""
	I1201 21:15:36.374559  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.374567  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:36.374573  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:36.374632  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:36.400298  527777 cri.go:89] found id: ""
	I1201 21:15:36.400324  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.400332  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:36.400339  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:36.400350  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:36.468826  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:36.468850  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:36.484335  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:36.484351  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:36.549841  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:36.541985   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.542492   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544187   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544616   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.546198   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:36.541985   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.542492   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544187   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544616   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.546198   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:36.549853  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:36.549864  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:36.630562  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:36.630587  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:39.169136  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:39.182222  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:39.182296  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:39.212188  527777 cri.go:89] found id: ""
	I1201 21:15:39.212202  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.212208  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:39.212213  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:39.212270  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:39.237215  527777 cri.go:89] found id: ""
	I1201 21:15:39.237229  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.237236  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:39.237241  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:39.237298  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:39.262205  527777 cri.go:89] found id: ""
	I1201 21:15:39.262219  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.262226  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:39.262232  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:39.262288  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:39.290471  527777 cri.go:89] found id: ""
	I1201 21:15:39.290485  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.290492  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:39.290498  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:39.290559  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:39.316212  527777 cri.go:89] found id: ""
	I1201 21:15:39.316238  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.316245  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:39.316251  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:39.316329  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:39.341014  527777 cri.go:89] found id: ""
	I1201 21:15:39.341037  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.341045  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:39.341051  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:39.341109  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:39.375032  527777 cri.go:89] found id: ""
	I1201 21:15:39.375058  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.375067  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:39.375083  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:39.375093  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:39.447422  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:39.447444  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:39.462737  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:39.462754  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:39.534298  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:39.526942   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.527544   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.528601   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.529043   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.530634   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:39.526942   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.527544   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.528601   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.529043   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.530634   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:39.534310  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:39.534320  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:39.611187  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:39.611208  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:42.146214  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:42.159004  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:42.159073  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:42.195922  527777 cri.go:89] found id: ""
	I1201 21:15:42.195938  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.195946  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:42.195952  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:42.196022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:42.230178  527777 cri.go:89] found id: ""
	I1201 21:15:42.230193  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.230200  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:42.230206  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:42.230271  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:42.261082  527777 cri.go:89] found id: ""
	I1201 21:15:42.261098  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.261105  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:42.261111  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:42.261188  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:42.295345  527777 cri.go:89] found id: ""
	I1201 21:15:42.295361  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.295377  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:42.295383  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:42.295457  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:42.330093  527777 cri.go:89] found id: ""
	I1201 21:15:42.330109  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.330116  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:42.330122  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:42.330186  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:42.358733  527777 cri.go:89] found id: ""
	I1201 21:15:42.358748  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.358756  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:42.358761  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:42.358823  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:42.388218  527777 cri.go:89] found id: ""
	I1201 21:15:42.388233  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.388240  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:42.388247  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:42.388258  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:42.469165  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:42.469185  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:42.500328  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:42.500345  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:42.569622  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:42.569642  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:42.585628  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:42.585645  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:42.654077  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:42.643924   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.644658   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.646844   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.647501   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.648880   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:42.643924   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.644658   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.646844   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.647501   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.648880   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:45.155990  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:45.177587  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:45.177664  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:45.216123  527777 cri.go:89] found id: ""
	I1201 21:15:45.216141  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.216149  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:45.216155  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:45.216241  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:45.257016  527777 cri.go:89] found id: ""
	I1201 21:15:45.257036  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.257044  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:45.257053  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:45.257139  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:45.310072  527777 cri.go:89] found id: ""
	I1201 21:15:45.310087  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.310095  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:45.310101  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:45.310165  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:45.339040  527777 cri.go:89] found id: ""
	I1201 21:15:45.339054  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.339062  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:45.339068  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:45.339154  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:45.370340  527777 cri.go:89] found id: ""
	I1201 21:15:45.370354  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.370361  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:45.370366  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:45.370426  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:45.396213  527777 cri.go:89] found id: ""
	I1201 21:15:45.396227  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.396234  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:45.396240  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:45.396299  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:45.423726  527777 cri.go:89] found id: ""
	I1201 21:15:45.423745  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.423755  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:45.423773  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:45.423784  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:45.490150  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:45.481612   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.482336   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.483955   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.484544   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.486132   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:45.481612   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.482336   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.483955   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.484544   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.486132   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:45.490161  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:45.490172  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:45.565908  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:45.565926  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:45.598740  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:45.598755  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:45.666263  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:45.666281  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:48.183348  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:48.193996  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:48.194068  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:48.221096  527777 cri.go:89] found id: ""
	I1201 21:15:48.221110  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.221117  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:48.221123  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:48.221180  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:48.247305  527777 cri.go:89] found id: ""
	I1201 21:15:48.247320  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.247328  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:48.247333  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:48.247392  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:48.277432  527777 cri.go:89] found id: ""
	I1201 21:15:48.277447  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.277453  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:48.277459  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:48.277521  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:48.304618  527777 cri.go:89] found id: ""
	I1201 21:15:48.304636  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.304643  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:48.304649  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:48.304712  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:48.331672  527777 cri.go:89] found id: ""
	I1201 21:15:48.331686  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.331694  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:48.331699  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:48.331757  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:48.360554  527777 cri.go:89] found id: ""
	I1201 21:15:48.360569  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.360577  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:48.360583  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:48.360640  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:48.385002  527777 cri.go:89] found id: ""
	I1201 21:15:48.385016  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.385023  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:48.385032  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:48.385043  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:48.414019  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:48.414036  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:48.479945  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:48.479964  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:48.495187  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:48.495206  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:48.560181  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:48.550756   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.551438   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.553149   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.554808   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.555445   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:48.550756   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.551438   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.553149   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.554808   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.555445   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:48.560191  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:48.560203  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:51.136751  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:51.147836  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:51.147914  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:51.178020  527777 cri.go:89] found id: ""
	I1201 21:15:51.178033  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.178041  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:51.178046  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:51.178106  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:51.206023  527777 cri.go:89] found id: ""
	I1201 21:15:51.206036  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.206044  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:51.206049  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:51.206150  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:51.236344  527777 cri.go:89] found id: ""
	I1201 21:15:51.236359  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.236366  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:51.236371  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:51.236434  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:51.262331  527777 cri.go:89] found id: ""
	I1201 21:15:51.262346  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.262353  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:51.262359  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:51.262419  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:51.290923  527777 cri.go:89] found id: ""
	I1201 21:15:51.290936  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.290944  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:51.290949  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:51.291016  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:51.318520  527777 cri.go:89] found id: ""
	I1201 21:15:51.318535  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.318542  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:51.318548  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:51.318607  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:51.345816  527777 cri.go:89] found id: ""
	I1201 21:15:51.345830  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.345837  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:51.345845  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:51.345857  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:51.361084  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:51.361100  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:51.427299  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:51.418365   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.419193   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.420874   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.421545   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.423332   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:51.418365   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.419193   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.420874   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.421545   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.423332   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:51.427309  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:51.427320  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:51.502906  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:51.502929  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:51.533675  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:51.533691  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:54.100640  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:54.111984  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:54.112047  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:54.137333  527777 cri.go:89] found id: ""
	I1201 21:15:54.137347  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.137353  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:54.137360  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:54.137419  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:54.166609  527777 cri.go:89] found id: ""
	I1201 21:15:54.166624  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.166635  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:54.166640  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:54.166705  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:54.193412  527777 cri.go:89] found id: ""
	I1201 21:15:54.193434  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.193441  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:54.193447  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:54.193509  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:54.219156  527777 cri.go:89] found id: ""
	I1201 21:15:54.219171  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.219178  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:54.219184  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:54.219241  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:54.248184  527777 cri.go:89] found id: ""
	I1201 21:15:54.248197  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.248204  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:54.248210  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:54.248278  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:54.274909  527777 cri.go:89] found id: ""
	I1201 21:15:54.274923  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.274931  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:54.274936  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:54.275003  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:54.300114  527777 cri.go:89] found id: ""
	I1201 21:15:54.300128  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.300135  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:54.300143  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:54.300154  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:54.366293  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:54.366312  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:54.382194  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:54.382210  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:54.446526  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:54.438379   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.439169   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.440693   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.441226   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.442826   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:15:54.446536  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:54.446548  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:54.525097  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:54.525120  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:57.056605  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:57.067114  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:57.067185  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:57.096913  527777 cri.go:89] found id: ""
	I1201 21:15:57.096926  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.096933  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:57.096939  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:57.096995  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:57.124785  527777 cri.go:89] found id: ""
	I1201 21:15:57.124799  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.124806  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:57.124812  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:57.124877  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:57.151613  527777 cri.go:89] found id: ""
	I1201 21:15:57.151628  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.151635  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:57.151640  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:57.151702  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:57.181422  527777 cri.go:89] found id: ""
	I1201 21:15:57.181437  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.181445  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:57.181451  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:57.181510  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:57.207775  527777 cri.go:89] found id: ""
	I1201 21:15:57.207789  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.207796  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:57.207801  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:57.207861  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:57.232906  527777 cri.go:89] found id: ""
	I1201 21:15:57.232931  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.232939  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:57.232945  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:57.233016  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:57.259075  527777 cri.go:89] found id: ""
	I1201 21:15:57.259100  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.259107  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:57.259115  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:57.259126  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:57.288148  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:57.288164  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:57.355525  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:57.355545  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:57.371229  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:57.371246  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:57.439767  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:57.431231   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.431971   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.433692   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.434306   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.436090   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:15:57.439779  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:57.439791  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:00.016574  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:00.063670  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:00.063743  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:00.181922  527777 cri.go:89] found id: ""
	I1201 21:16:00.181939  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.181947  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:00.181954  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:00.183169  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:00.318653  527777 cri.go:89] found id: ""
	I1201 21:16:00.318668  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.318676  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:00.318682  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:00.318752  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:00.366365  527777 cri.go:89] found id: ""
	I1201 21:16:00.366381  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.366391  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:00.366398  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:00.366497  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:00.432333  527777 cri.go:89] found id: ""
	I1201 21:16:00.432349  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.432358  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:00.432364  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:00.432436  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:00.487199  527777 cri.go:89] found id: ""
	I1201 21:16:00.487216  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.487238  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:00.487244  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:00.487315  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:00.541398  527777 cri.go:89] found id: ""
	I1201 21:16:00.541429  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.541438  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:00.541444  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:00.541530  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:00.577064  527777 cri.go:89] found id: ""
	I1201 21:16:00.577082  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.577095  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:00.577103  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:00.577116  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:00.646395  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:00.646418  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:00.667724  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:00.667741  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:00.750849  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:00.742119   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.743012   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.744823   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.745562   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.747124   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:16:00.750860  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:00.750872  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:00.828858  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:00.828881  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:03.360481  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:03.371537  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:03.371611  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:03.401359  527777 cri.go:89] found id: ""
	I1201 21:16:03.401373  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.401380  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:03.401385  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:03.401452  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:03.428335  527777 cri.go:89] found id: ""
	I1201 21:16:03.428350  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.428358  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:03.428363  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:03.428424  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:03.460610  527777 cri.go:89] found id: ""
	I1201 21:16:03.460623  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.460630  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:03.460636  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:03.460695  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:03.489139  527777 cri.go:89] found id: ""
	I1201 21:16:03.489153  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.489161  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:03.489168  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:03.489234  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:03.519388  527777 cri.go:89] found id: ""
	I1201 21:16:03.519410  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.519418  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:03.519423  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:03.519490  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:03.549588  527777 cri.go:89] found id: ""
	I1201 21:16:03.549602  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.549610  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:03.549615  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:03.549678  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:03.576025  527777 cri.go:89] found id: ""
	I1201 21:16:03.576039  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.576047  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:03.576055  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:03.576066  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:03.605415  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:03.605431  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:03.675775  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:03.675797  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:03.691777  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:03.691793  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:03.765238  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:03.755858   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.756644   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.758434   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.759088   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.760930   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:16:03.765250  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:03.765263  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:06.346338  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:06.356267  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:06.356325  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:06.380678  527777 cri.go:89] found id: ""
	I1201 21:16:06.380691  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.380717  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:06.380723  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:06.380780  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:06.410489  527777 cri.go:89] found id: ""
	I1201 21:16:06.410503  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.410518  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:06.410524  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:06.410588  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:06.443231  527777 cri.go:89] found id: ""
	I1201 21:16:06.443250  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.443257  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:06.443263  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:06.443334  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:06.468603  527777 cri.go:89] found id: ""
	I1201 21:16:06.468618  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.468625  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:06.468631  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:06.468700  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:06.493128  527777 cri.go:89] found id: ""
	I1201 21:16:06.493141  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.493148  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:06.493154  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:06.493212  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:06.518860  527777 cri.go:89] found id: ""
	I1201 21:16:06.518874  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.518881  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:06.518886  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:06.518958  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:06.545817  527777 cri.go:89] found id: ""
	I1201 21:16:06.545831  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.545839  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:06.545846  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:06.545857  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:06.610356  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:06.610378  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:06.625472  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:06.625488  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:06.722623  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:06.711338   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.712429   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.713404   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.714175   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.716915   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:16:06.722633  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:06.722648  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:06.798208  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:06.798228  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:09.328391  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:09.339639  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:09.339706  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:09.368398  527777 cri.go:89] found id: ""
	I1201 21:16:09.368421  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.368428  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:09.368434  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:09.368512  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:09.398525  527777 cri.go:89] found id: ""
	I1201 21:16:09.398540  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.398548  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:09.398553  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:09.398615  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:09.426105  527777 cri.go:89] found id: ""
	I1201 21:16:09.426121  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.426129  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:09.426145  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:09.426205  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:09.456433  527777 cri.go:89] found id: ""
	I1201 21:16:09.456449  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.456456  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:09.456462  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:09.456525  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:09.488473  527777 cri.go:89] found id: ""
	I1201 21:16:09.488488  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.488495  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:09.488503  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:09.488563  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:09.514937  527777 cri.go:89] found id: ""
	I1201 21:16:09.514951  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.514958  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:09.514964  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:09.515027  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:09.545815  527777 cri.go:89] found id: ""
	I1201 21:16:09.545829  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.545837  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:09.545845  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:09.545857  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:09.575097  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:09.575115  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:09.642216  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:09.642237  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:09.663629  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:09.663645  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:09.745863  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:09.737300   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.737977   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.739598   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.740167   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.741918   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:09.737300   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.737977   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.739598   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.740167   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.741918   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:09.745876  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:09.745888  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:12.327853  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:12.338928  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:12.338992  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:12.372550  527777 cri.go:89] found id: ""
	I1201 21:16:12.372583  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.372591  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:12.372597  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:12.372662  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:12.402760  527777 cri.go:89] found id: ""
	I1201 21:16:12.402776  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.402784  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:12.402790  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:12.402851  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:12.429193  527777 cri.go:89] found id: ""
	I1201 21:16:12.429208  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.429215  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:12.429221  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:12.429286  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:12.456952  527777 cri.go:89] found id: ""
	I1201 21:16:12.456966  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.456973  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:12.456978  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:12.457037  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:12.483859  527777 cri.go:89] found id: ""
	I1201 21:16:12.483874  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.483881  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:12.483887  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:12.483950  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:12.510218  527777 cri.go:89] found id: ""
	I1201 21:16:12.510234  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.510242  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:12.510248  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:12.510323  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:12.536841  527777 cri.go:89] found id: ""
	I1201 21:16:12.536856  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.536864  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:12.536871  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:12.536881  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:12.612682  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:12.612702  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:12.641218  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:12.641235  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:12.719908  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:12.719930  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:12.736058  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:12.736077  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:12.803643  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:12.795056   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.795699   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.797375   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.798039   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.799685   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:12.795056   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.795699   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.797375   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.798039   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.799685   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:15.304417  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:15.314647  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:15.314707  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:15.342468  527777 cri.go:89] found id: ""
	I1201 21:16:15.342483  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.342491  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:15.342497  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:15.342559  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:15.369048  527777 cri.go:89] found id: ""
	I1201 21:16:15.369063  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.369071  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:15.369077  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:15.369140  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:15.393869  527777 cri.go:89] found id: ""
	I1201 21:16:15.393884  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.393891  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:15.393897  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:15.393960  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:15.420049  527777 cri.go:89] found id: ""
	I1201 21:16:15.420063  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.420071  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:15.420077  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:15.420136  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:15.450112  527777 cri.go:89] found id: ""
	I1201 21:16:15.450126  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.450134  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:15.450140  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:15.450201  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:15.475788  527777 cri.go:89] found id: ""
	I1201 21:16:15.475803  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.475811  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:15.475817  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:15.475884  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:15.502058  527777 cri.go:89] found id: ""
	I1201 21:16:15.502072  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.502084  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:15.502092  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:15.502102  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:15.535936  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:15.535953  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:15.601548  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:15.601568  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:15.617150  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:15.617167  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:15.694491  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:15.683261   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.684161   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.685978   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.686544   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.688226   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:15.683261   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.684161   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.685978   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.686544   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.688226   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:15.694502  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:15.694514  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:18.282089  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:18.292620  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:18.292687  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:18.320483  527777 cri.go:89] found id: ""
	I1201 21:16:18.320497  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.320504  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:18.320510  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:18.320569  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:18.346376  527777 cri.go:89] found id: ""
	I1201 21:16:18.346389  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.346397  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:18.346402  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:18.346459  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:18.377534  527777 cri.go:89] found id: ""
	I1201 21:16:18.377549  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.377557  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:18.377562  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:18.377619  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:18.402867  527777 cri.go:89] found id: ""
	I1201 21:16:18.402882  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.402892  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:18.402897  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:18.402952  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:18.429104  527777 cri.go:89] found id: ""
	I1201 21:16:18.429119  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.429126  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:18.429132  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:18.429193  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:18.455237  527777 cri.go:89] found id: ""
	I1201 21:16:18.455251  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.455257  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:18.455263  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:18.455330  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:18.480176  527777 cri.go:89] found id: ""
	I1201 21:16:18.480190  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.480197  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:18.480205  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:18.480215  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:18.554692  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:18.554713  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:18.586044  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:18.586062  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:18.654056  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:18.654076  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:18.670115  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:18.670131  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:18.739729  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:18.731971   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.732738   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.734274   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.734737   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.736253   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:18.731971   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.732738   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.734274   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.734737   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.736253   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:21.240925  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:21.251332  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:21.251400  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:21.277213  527777 cri.go:89] found id: ""
	I1201 21:16:21.277228  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.277266  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:21.277275  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:21.277349  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:21.304294  527777 cri.go:89] found id: ""
	I1201 21:16:21.304308  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.304316  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:21.304321  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:21.304393  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:21.331354  527777 cri.go:89] found id: ""
	I1201 21:16:21.331369  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.331377  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:21.331382  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:21.331455  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:21.358548  527777 cri.go:89] found id: ""
	I1201 21:16:21.358563  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.358571  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:21.358577  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:21.358637  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:21.384228  527777 cri.go:89] found id: ""
	I1201 21:16:21.384242  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.384250  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:21.384255  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:21.384321  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:21.413560  527777 cri.go:89] found id: ""
	I1201 21:16:21.413574  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.413581  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:21.413587  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:21.413647  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:21.439790  527777 cri.go:89] found id: ""
	I1201 21:16:21.439805  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.439813  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:21.439821  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:21.439839  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:21.505587  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:21.505607  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:21.522038  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:21.522064  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:21.590692  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:21.582084   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.583389   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.584091   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.585517   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.585879   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:21.582084   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.583389   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.584091   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.585517   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.585879   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:21.590718  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:21.590730  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:21.667703  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:21.667727  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:24.203209  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:24.214159  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:24.214230  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:24.242378  527777 cri.go:89] found id: ""
	I1201 21:16:24.242392  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.242399  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:24.242405  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:24.242486  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:24.269017  527777 cri.go:89] found id: ""
	I1201 21:16:24.269032  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.269039  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:24.269045  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:24.269103  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:24.295927  527777 cri.go:89] found id: ""
	I1201 21:16:24.295942  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.295949  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:24.295955  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:24.296019  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:24.321917  527777 cri.go:89] found id: ""
	I1201 21:16:24.321932  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.321939  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:24.321944  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:24.322012  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:24.350147  527777 cri.go:89] found id: ""
	I1201 21:16:24.350163  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.350171  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:24.350177  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:24.350250  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:24.376131  527777 cri.go:89] found id: ""
	I1201 21:16:24.376145  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.376153  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:24.376160  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:24.376220  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:24.403024  527777 cri.go:89] found id: ""
	I1201 21:16:24.403039  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.403046  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:24.403055  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:24.403068  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:24.418212  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:24.418230  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:24.486448  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:24.478347   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.478999   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.480897   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.481565   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.482855   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:24.478347   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.478999   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.480897   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.481565   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.482855   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:24.486460  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:24.486472  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:24.563285  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:24.563307  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:24.597003  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:24.597023  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:27.167466  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:27.179061  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:27.179139  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:27.210380  527777 cri.go:89] found id: ""
	I1201 21:16:27.210394  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.210402  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:27.210409  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:27.210474  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:27.238732  527777 cri.go:89] found id: ""
	I1201 21:16:27.238747  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.238754  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:27.238760  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:27.238827  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:27.265636  527777 cri.go:89] found id: ""
	I1201 21:16:27.265652  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.265661  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:27.265667  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:27.265736  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:27.292213  527777 cri.go:89] found id: ""
	I1201 21:16:27.292228  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.292235  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:27.292241  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:27.292300  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:27.324732  527777 cri.go:89] found id: ""
	I1201 21:16:27.324747  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.324755  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:27.324762  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:27.324827  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:27.352484  527777 cri.go:89] found id: ""
	I1201 21:16:27.352499  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.352507  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:27.352513  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:27.352590  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:27.384113  527777 cri.go:89] found id: ""
	I1201 21:16:27.384128  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.384136  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:27.384144  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:27.384155  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:27.415615  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:27.415634  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:27.482296  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:27.482319  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:27.498829  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:27.498846  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:27.569732  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:27.560441   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.561149   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.563083   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.564057   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.565939   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:27.560441   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.561149   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.563083   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.564057   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.565939   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:27.569744  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:27.569757  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:30.145371  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:30.156840  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:30.156922  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:30.184704  527777 cri.go:89] found id: ""
	I1201 21:16:30.184719  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.184727  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:30.184733  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:30.184795  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:30.213086  527777 cri.go:89] found id: ""
	I1201 21:16:30.213110  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.213120  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:30.213125  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:30.213192  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:30.245472  527777 cri.go:89] found id: ""
	I1201 21:16:30.245486  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.245494  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:30.245499  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:30.245565  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:30.273463  527777 cri.go:89] found id: ""
	I1201 21:16:30.273477  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.273485  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:30.273491  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:30.273557  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:30.302141  527777 cri.go:89] found id: ""
	I1201 21:16:30.302156  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.302164  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:30.302170  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:30.302232  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:30.329744  527777 cri.go:89] found id: ""
	I1201 21:16:30.329758  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.329765  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:30.329771  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:30.329833  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:30.356049  527777 cri.go:89] found id: ""
	I1201 21:16:30.356063  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.356071  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:30.356079  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:30.356110  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:30.424124  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:30.415484   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.416264   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.417932   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.418545   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.420321   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:30.415484   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.416264   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.417932   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.418545   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.420321   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:30.424134  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:30.424145  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:30.498989  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:30.499009  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:30.536189  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:30.536208  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:30.601111  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:30.601130  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:33.116248  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:33.129790  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:33.129876  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:33.162072  527777 cri.go:89] found id: ""
	I1201 21:16:33.162085  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.162093  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:33.162098  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:33.162168  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:33.188853  527777 cri.go:89] found id: ""
	I1201 21:16:33.188868  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.188875  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:33.188881  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:33.188944  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:33.215527  527777 cri.go:89] found id: ""
	I1201 21:16:33.215541  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.215548  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:33.215554  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:33.215613  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:33.241336  527777 cri.go:89] found id: ""
	I1201 21:16:33.241350  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.241357  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:33.241363  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:33.241422  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:33.267551  527777 cri.go:89] found id: ""
	I1201 21:16:33.267564  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.267571  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:33.267576  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:33.267639  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:33.293257  527777 cri.go:89] found id: ""
	I1201 21:16:33.293273  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.293280  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:33.293286  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:33.293346  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:33.324702  527777 cri.go:89] found id: ""
	I1201 21:16:33.324717  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.324725  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:33.324733  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:33.324745  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:33.393448  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:33.393473  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:33.409048  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:33.409075  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:33.473709  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:33.465395   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.465779   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.467541   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.468183   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.469632   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:33.465395   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.465779   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.467541   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.468183   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.469632   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:33.473720  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:33.473731  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:33.549174  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:33.549194  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:36.083124  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:36.093860  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:36.093919  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:36.122911  527777 cri.go:89] found id: ""
	I1201 21:16:36.122925  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.122932  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:36.122938  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:36.123000  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:36.148002  527777 cri.go:89] found id: ""
	I1201 21:16:36.148016  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.148023  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:36.148028  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:36.148088  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:36.173008  527777 cri.go:89] found id: ""
	I1201 21:16:36.173022  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.173029  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:36.173034  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:36.173092  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:36.198828  527777 cri.go:89] found id: ""
	I1201 21:16:36.198841  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.198848  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:36.198854  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:36.198909  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:36.224001  527777 cri.go:89] found id: ""
	I1201 21:16:36.224015  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.224022  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:36.224027  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:36.224085  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:36.249054  527777 cri.go:89] found id: ""
	I1201 21:16:36.249068  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.249075  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:36.249080  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:36.249140  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:36.273000  527777 cri.go:89] found id: ""
	I1201 21:16:36.273014  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.273021  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:36.273029  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:36.273039  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:36.337502  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:36.337521  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:36.353315  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:36.353331  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:36.424612  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:36.416389   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.416852   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.418267   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.419034   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.420807   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:36.416389   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.416852   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.418267   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.419034   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.420807   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:36.424623  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:36.424633  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:36.503070  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:36.503100  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:39.034568  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:39.045696  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:39.045760  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:39.071542  527777 cri.go:89] found id: ""
	I1201 21:16:39.071555  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.071563  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:39.071569  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:39.071630  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:39.102301  527777 cri.go:89] found id: ""
	I1201 21:16:39.102315  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.102322  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:39.102328  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:39.102384  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:39.129808  527777 cri.go:89] found id: ""
	I1201 21:16:39.129823  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.129830  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:39.129836  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:39.129895  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:39.155555  527777 cri.go:89] found id: ""
	I1201 21:16:39.155569  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.155576  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:39.155582  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:39.155650  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:39.186394  527777 cri.go:89] found id: ""
	I1201 21:16:39.186408  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.186415  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:39.186420  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:39.186485  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:39.213875  527777 cri.go:89] found id: ""
	I1201 21:16:39.213889  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.213896  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:39.213901  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:39.213957  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:39.243609  527777 cri.go:89] found id: ""
	I1201 21:16:39.243623  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.243631  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:39.243640  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:39.243652  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:39.307878  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:39.307897  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:39.322972  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:39.322989  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:39.391843  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:39.383574   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.384012   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.385493   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.385831   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.387179   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:39.383574   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.384012   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.385493   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.385831   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.387179   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:39.391853  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:39.391869  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:39.471894  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:39.471915  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:42.007008  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:42.029520  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:42.029588  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:42.057505  527777 cri.go:89] found id: ""
	I1201 21:16:42.057520  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.057528  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:42.057534  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:42.057598  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:42.097060  527777 cri.go:89] found id: ""
	I1201 21:16:42.097086  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.097094  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:42.097100  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:42.097191  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:42.136029  527777 cri.go:89] found id: ""
	I1201 21:16:42.136048  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.136058  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:42.136064  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:42.136155  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:42.183711  527777 cri.go:89] found id: ""
	I1201 21:16:42.183733  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.183743  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:42.183750  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:42.183825  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:42.219282  527777 cri.go:89] found id: ""
	I1201 21:16:42.219298  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.219320  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:42.219326  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:42.219393  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:42.248969  527777 cri.go:89] found id: ""
	I1201 21:16:42.248986  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.248994  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:42.249005  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:42.249079  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:42.283438  527777 cri.go:89] found id: ""
	I1201 21:16:42.283452  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.283459  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:42.283467  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:42.283479  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:42.355657  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:42.347226   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.347801   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.349475   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.349945   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.351044   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:42.347226   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.347801   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.349475   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.349945   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.351044   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:42.355675  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:42.355686  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:42.432138  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:42.432158  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:42.466460  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:42.466475  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:42.532633  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:42.532653  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:45.050487  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:45.077310  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:45.077404  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:45.125431  527777 cri.go:89] found id: ""
	I1201 21:16:45.125455  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.125463  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:45.125469  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:45.125541  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:45.159113  527777 cri.go:89] found id: ""
	I1201 21:16:45.159151  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.159161  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:45.159167  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:45.159238  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:45.205059  527777 cri.go:89] found id: ""
	I1201 21:16:45.205075  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.205084  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:45.205092  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:45.205213  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:45.256952  527777 cri.go:89] found id: ""
	I1201 21:16:45.257035  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.257044  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:45.257051  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:45.257244  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:45.299953  527777 cri.go:89] found id: ""
	I1201 21:16:45.299967  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.299975  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:45.299981  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:45.300047  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:45.334546  527777 cri.go:89] found id: ""
	I1201 21:16:45.334562  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.334570  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:45.334576  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:45.334641  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:45.366922  527777 cri.go:89] found id: ""
	I1201 21:16:45.366936  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.366944  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:45.366952  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:45.366973  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:45.384985  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:45.385003  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:45.455424  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:45.445999   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.446779   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.448616   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.449343   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.450996   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:45.445999   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.446779   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.448616   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.449343   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.450996   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:45.455434  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:45.455446  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:45.532668  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:45.532689  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:45.572075  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:45.572092  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:48.147493  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:48.158252  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:48.158331  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:48.185671  527777 cri.go:89] found id: ""
	I1201 21:16:48.185685  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.185692  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:48.185697  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:48.185766  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:48.211977  527777 cri.go:89] found id: ""
	I1201 21:16:48.211991  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.211998  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:48.212003  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:48.212059  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:48.238605  527777 cri.go:89] found id: ""
	I1201 21:16:48.238620  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.238627  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:48.238632  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:48.238691  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:48.272407  527777 cri.go:89] found id: ""
	I1201 21:16:48.272421  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.272428  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:48.272433  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:48.272491  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:48.300451  527777 cri.go:89] found id: ""
	I1201 21:16:48.300465  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.300472  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:48.300478  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:48.300543  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:48.326518  527777 cri.go:89] found id: ""
	I1201 21:16:48.326542  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.326550  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:48.326555  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:48.326629  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:48.353027  527777 cri.go:89] found id: ""
	I1201 21:16:48.353043  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.353050  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:48.353059  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:48.353070  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:48.418908  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:48.418928  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:48.435338  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:48.435358  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:48.502670  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:48.494115   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.494749   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.496453   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.497013   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.498610   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:48.494115   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.494749   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.496453   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.497013   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.498610   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:48.502708  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:48.502718  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:48.579198  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:48.579219  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:51.111632  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:51.122895  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:51.122970  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:51.149845  527777 cri.go:89] found id: ""
	I1201 21:16:51.149859  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.149867  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:51.149872  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:51.149937  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:51.182385  527777 cri.go:89] found id: ""
	I1201 21:16:51.182399  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.182406  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:51.182411  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:51.182473  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:51.207954  527777 cri.go:89] found id: ""
	I1201 21:16:51.207967  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.208015  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:51.208024  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:51.208080  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:51.233058  527777 cri.go:89] found id: ""
	I1201 21:16:51.233071  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.233077  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:51.233083  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:51.233146  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:51.259105  527777 cri.go:89] found id: ""
	I1201 21:16:51.259119  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.259127  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:51.259147  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:51.259205  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:51.284870  527777 cri.go:89] found id: ""
	I1201 21:16:51.284884  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.284891  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:51.284896  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:51.284953  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:51.312084  527777 cri.go:89] found id: ""
	I1201 21:16:51.312099  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.312106  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:51.312115  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:51.312126  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:51.342115  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:51.342134  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:51.408816  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:51.408836  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:51.425032  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:51.425054  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:51.494088  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:51.485702   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.486261   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.487911   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.488439   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.489973   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:51.485702   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.486261   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.487911   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.488439   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.489973   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:51.494097  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:51.494107  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:54.070393  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:54.082393  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:54.082464  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:54.112007  527777 cri.go:89] found id: ""
	I1201 21:16:54.112033  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.112041  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:54.112048  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:54.112120  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:54.142629  527777 cri.go:89] found id: ""
	I1201 21:16:54.142643  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.142650  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:54.142656  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:54.142715  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:54.170596  527777 cri.go:89] found id: ""
	I1201 21:16:54.170611  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.170618  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:54.170623  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:54.170685  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:54.199276  527777 cri.go:89] found id: ""
	I1201 21:16:54.199301  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.199309  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:54.199314  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:54.199385  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:54.229268  527777 cri.go:89] found id: ""
	I1201 21:16:54.229285  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.229294  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:54.229300  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:54.229378  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:54.261273  527777 cri.go:89] found id: ""
	I1201 21:16:54.261289  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.261298  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:54.261306  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:54.261409  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:54.289154  527777 cri.go:89] found id: ""
	I1201 21:16:54.289169  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.289189  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:54.289199  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:54.289216  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:54.363048  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:54.355149   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.356097   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.357711   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.358323   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.359471   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:54.355149   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.356097   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.357711   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.358323   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.359471   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:54.363059  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:54.363070  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:54.440875  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:54.440897  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:54.471338  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:54.471355  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:54.543810  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:54.543830  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:57.061388  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:57.071929  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:57.071998  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:57.102516  527777 cri.go:89] found id: ""
	I1201 21:16:57.102531  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.102540  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:57.102546  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:57.102614  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:57.129734  527777 cri.go:89] found id: ""
	I1201 21:16:57.129749  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.129756  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:57.129761  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:57.129825  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:57.160948  527777 cri.go:89] found id: ""
	I1201 21:16:57.160962  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.160971  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:57.160977  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:57.161049  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:57.192059  527777 cri.go:89] found id: ""
	I1201 21:16:57.192075  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.192082  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:57.192088  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:57.192155  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:57.217906  527777 cri.go:89] found id: ""
	I1201 21:16:57.217920  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.217927  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:57.217932  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:57.217992  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:57.246391  527777 cri.go:89] found id: ""
	I1201 21:16:57.246406  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.246414  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:57.246420  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:57.246480  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:57.273534  527777 cri.go:89] found id: ""
	I1201 21:16:57.273558  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.273565  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:57.273573  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:57.273585  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:57.338589  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:57.338609  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:57.354225  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:57.354241  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:57.425192  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:57.416917   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.417985   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419291   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419806   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.421427   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:57.416917   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.417985   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419291   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419806   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.421427   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:57.425202  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:57.425213  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:57.501690  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:57.501713  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:00.031846  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:00.071974  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:00.072071  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:00.158888  527777 cri.go:89] found id: ""
	I1201 21:17:00.158904  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.158912  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:00.158918  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:00.158994  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:00.267283  527777 cri.go:89] found id: ""
	I1201 21:17:00.267299  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.267306  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:00.267312  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:00.267395  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:00.331710  527777 cri.go:89] found id: ""
	I1201 21:17:00.331725  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.331733  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:00.331740  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:00.331821  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:00.416435  527777 cri.go:89] found id: ""
	I1201 21:17:00.416468  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.416476  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:00.416482  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:00.416566  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:00.456878  527777 cri.go:89] found id: ""
	I1201 21:17:00.456894  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.456904  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:00.456909  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:00.456979  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:00.511096  527777 cri.go:89] found id: ""
	I1201 21:17:00.511113  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.511122  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:00.511166  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:00.511245  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:00.565444  527777 cri.go:89] found id: ""
	I1201 21:17:00.565463  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.565471  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:00.565480  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:00.565498  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:00.641086  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:00.641121  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:00.662045  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:00.662064  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:00.750234  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:00.740709   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.741500   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.743472   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.744204   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.745911   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:00.740709   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.741500   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.743472   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.744204   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.745911   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:00.750246  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:00.750258  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:00.828511  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:00.828539  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:03.366405  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:03.379053  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:03.379127  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:03.412977  527777 cri.go:89] found id: ""
	I1201 21:17:03.412991  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.412999  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:03.413005  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:03.413074  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:03.442789  527777 cri.go:89] found id: ""
	I1201 21:17:03.442817  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.442827  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:03.442834  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:03.442956  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:03.472731  527777 cri.go:89] found id: ""
	I1201 21:17:03.472758  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.472767  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:03.472772  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:03.472843  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:03.503719  527777 cri.go:89] found id: ""
	I1201 21:17:03.503735  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.503744  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:03.503751  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:03.503823  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:03.533642  527777 cri.go:89] found id: ""
	I1201 21:17:03.533658  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.533665  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:03.533671  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:03.533749  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:03.562889  527777 cri.go:89] found id: ""
	I1201 21:17:03.562908  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.562916  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:03.562922  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:03.563006  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:03.592257  527777 cri.go:89] found id: ""
	I1201 21:17:03.592275  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.592283  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:03.592291  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:03.592303  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:03.660263  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:03.660282  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:03.683357  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:03.683375  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:03.765695  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:03.755989   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.757040   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758018   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758825   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.760781   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:03.755989   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.757040   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758018   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758825   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.760781   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:03.765707  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:03.765719  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:03.842543  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:03.842567  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:06.376185  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:06.387932  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:06.388000  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:06.417036  527777 cri.go:89] found id: ""
	I1201 21:17:06.417050  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.417058  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:06.417064  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:06.417125  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:06.447064  527777 cri.go:89] found id: ""
	I1201 21:17:06.447090  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.447098  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:06.447104  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:06.447207  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:06.476879  527777 cri.go:89] found id: ""
	I1201 21:17:06.476893  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.476900  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:06.476905  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:06.476968  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:06.506320  527777 cri.go:89] found id: ""
	I1201 21:17:06.506338  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.506346  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:06.506352  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:06.506419  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:06.535420  527777 cri.go:89] found id: ""
	I1201 21:17:06.535443  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.535451  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:06.535458  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:06.535525  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:06.563751  527777 cri.go:89] found id: ""
	I1201 21:17:06.563784  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.563792  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:06.563798  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:06.563865  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:06.597779  527777 cri.go:89] found id: ""
	I1201 21:17:06.597795  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.597803  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:06.597811  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:06.597823  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:06.681458  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:06.672535   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.673200   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.674869   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.675413   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.677204   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:06.672535   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.673200   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.674869   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.675413   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.677204   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:06.681470  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:06.681482  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:06.778343  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:06.778369  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:06.812835  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:06.812854  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:06.886097  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:06.886123  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:09.404611  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:09.415307  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:09.415386  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:09.454145  527777 cri.go:89] found id: ""
	I1201 21:17:09.454159  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.454168  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:09.454174  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:09.454240  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:09.483869  527777 cri.go:89] found id: ""
	I1201 21:17:09.483885  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.483893  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:09.483899  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:09.483961  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:09.510637  527777 cri.go:89] found id: ""
	I1201 21:17:09.510650  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.510657  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:09.510662  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:09.510719  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:09.542823  527777 cri.go:89] found id: ""
	I1201 21:17:09.542837  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.542844  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:09.542849  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:09.542911  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:09.570165  527777 cri.go:89] found id: ""
	I1201 21:17:09.570184  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.570191  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:09.570196  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:09.570254  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:09.595630  527777 cri.go:89] found id: ""
	I1201 21:17:09.595645  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.595652  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:09.595658  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:09.595722  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:09.621205  527777 cri.go:89] found id: ""
	I1201 21:17:09.621219  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.621226  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:09.621234  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:09.621244  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:09.700160  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:09.700182  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:09.739401  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:09.739425  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:09.809572  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:09.809594  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:09.828869  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:09.828886  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:09.920701  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:09.910986   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.911656   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.913525   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.914123   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.915691   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:09.910986   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.911656   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.913525   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.914123   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.915691   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:12.421012  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:12.432213  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:12.432287  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:12.459734  527777 cri.go:89] found id: ""
	I1201 21:17:12.459757  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.459765  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:12.459771  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:12.459840  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:12.485671  527777 cri.go:89] found id: ""
	I1201 21:17:12.485685  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.485692  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:12.485698  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:12.485757  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:12.511548  527777 cri.go:89] found id: ""
	I1201 21:17:12.511564  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.511572  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:12.511577  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:12.511637  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:12.542030  527777 cri.go:89] found id: ""
	I1201 21:17:12.542046  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.542053  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:12.542060  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:12.542120  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:12.567661  527777 cri.go:89] found id: ""
	I1201 21:17:12.567675  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.567691  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:12.567696  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:12.567766  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:12.597625  527777 cri.go:89] found id: ""
	I1201 21:17:12.597640  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.597647  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:12.597653  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:12.597718  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:12.623694  527777 cri.go:89] found id: ""
	I1201 21:17:12.623708  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.623715  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:12.623722  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:12.623733  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:12.638757  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:12.638772  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:12.731591  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:12.722231   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.723090   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.724750   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.725287   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.726853   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:12.722231   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.723090   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.724750   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.725287   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.726853   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:12.731601  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:12.731612  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:12.808720  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:12.808739  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:12.838448  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:12.838465  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:15.411670  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:15.422227  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:15.422288  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:15.449244  527777 cri.go:89] found id: ""
	I1201 21:17:15.449267  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.449275  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:15.449281  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:15.449351  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:15.475790  527777 cri.go:89] found id: ""
	I1201 21:17:15.475804  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.475812  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:15.475817  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:15.475883  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:15.505030  527777 cri.go:89] found id: ""
	I1201 21:17:15.505044  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.505052  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:15.505057  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:15.505121  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:15.535702  527777 cri.go:89] found id: ""
	I1201 21:17:15.535717  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.535726  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:15.535732  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:15.535802  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:15.561881  527777 cri.go:89] found id: ""
	I1201 21:17:15.561895  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.561903  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:15.561909  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:15.561968  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:15.589608  527777 cri.go:89] found id: ""
	I1201 21:17:15.589623  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.589631  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:15.589637  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:15.589704  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:15.617545  527777 cri.go:89] found id: ""
	I1201 21:17:15.617559  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.617565  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:15.617573  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:15.617584  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:15.633049  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:15.633067  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:15.719603  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:15.707520   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.708421   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710252   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710836   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.715756   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:15.707520   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.708421   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710252   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710836   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.715756   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:15.719617  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:15.719628  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:15.795783  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:15.795806  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:15.829611  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:15.829629  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:18.397343  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:18.407645  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:18.407707  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:18.431992  527777 cri.go:89] found id: ""
	I1201 21:17:18.432013  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.432020  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:18.432025  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:18.432082  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:18.456900  527777 cri.go:89] found id: ""
	I1201 21:17:18.456914  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.456921  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:18.456927  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:18.456985  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:18.482130  527777 cri.go:89] found id: ""
	I1201 21:17:18.482144  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.482151  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:18.482156  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:18.482216  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:18.506788  527777 cri.go:89] found id: ""
	I1201 21:17:18.506802  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.506809  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:18.506814  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:18.506880  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:18.535015  527777 cri.go:89] found id: ""
	I1201 21:17:18.535029  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.535036  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:18.535041  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:18.535102  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:18.561266  527777 cri.go:89] found id: ""
	I1201 21:17:18.561281  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.561288  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:18.561294  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:18.561350  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:18.590006  527777 cri.go:89] found id: ""
	I1201 21:17:18.590020  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.590027  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:18.590034  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:18.590044  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:18.655626  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:18.655644  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:18.673142  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:18.673158  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:18.755072  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:18.747127   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.747701   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749289   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749738   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.751418   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:18.747127   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.747701   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749289   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749738   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.751418   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:18.755084  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:18.755097  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:18.830997  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:18.831019  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:21.361828  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:21.372633  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:21.372693  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:21.397967  527777 cri.go:89] found id: ""
	I1201 21:17:21.397981  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.398009  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:21.398014  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:21.398083  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:21.424540  527777 cri.go:89] found id: ""
	I1201 21:17:21.424554  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.424570  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:21.424575  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:21.424644  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:21.450905  527777 cri.go:89] found id: ""
	I1201 21:17:21.450920  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.450948  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:21.450954  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:21.451029  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:21.483885  527777 cri.go:89] found id: ""
	I1201 21:17:21.483899  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.483906  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:21.483911  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:21.483966  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:21.514135  527777 cri.go:89] found id: ""
	I1201 21:17:21.514149  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.514156  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:21.514162  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:21.514221  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:21.540203  527777 cri.go:89] found id: ""
	I1201 21:17:21.540217  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.540224  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:21.540229  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:21.540285  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:21.570752  527777 cri.go:89] found id: ""
	I1201 21:17:21.570765  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.570772  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:21.570780  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:21.570794  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:21.636631  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:21.636651  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:21.652498  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:21.652516  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:21.739586  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:21.730607   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.731381   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733218   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733844   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.735529   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:21.730607   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.731381   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733218   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733844   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.735529   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:21.739597  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:21.739609  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:21.815773  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:21.815793  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:24.351500  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:24.361669  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:24.361728  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:24.390941  527777 cri.go:89] found id: ""
	I1201 21:17:24.390955  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.390962  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:24.390968  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:24.391024  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:24.416426  527777 cri.go:89] found id: ""
	I1201 21:17:24.416440  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.416448  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:24.416453  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:24.416510  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:24.443044  527777 cri.go:89] found id: ""
	I1201 21:17:24.443058  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.443065  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:24.443070  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:24.443182  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:24.468754  527777 cri.go:89] found id: ""
	I1201 21:17:24.468769  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.468776  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:24.468781  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:24.468840  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:24.494385  527777 cri.go:89] found id: ""
	I1201 21:17:24.494399  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.494406  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:24.494416  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:24.494477  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:24.519676  527777 cri.go:89] found id: ""
	I1201 21:17:24.519689  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.519696  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:24.519702  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:24.519761  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:24.546000  527777 cri.go:89] found id: ""
	I1201 21:17:24.546014  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.546021  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:24.546028  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:24.546041  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:24.611509  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:24.611529  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:24.626295  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:24.626324  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:24.702708  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:24.694946   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.695784   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697344   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697621   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.699100   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:24.694946   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.695784   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697344   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697621   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.699100   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:24.702719  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:24.702731  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:24.784492  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:24.784514  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:27.320817  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:27.331542  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:27.331602  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:27.357014  527777 cri.go:89] found id: ""
	I1201 21:17:27.357028  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.357035  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:27.357040  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:27.357098  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:27.381792  527777 cri.go:89] found id: ""
	I1201 21:17:27.381806  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.381813  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:27.381818  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:27.381880  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:27.407905  527777 cri.go:89] found id: ""
	I1201 21:17:27.407919  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.407927  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:27.407933  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:27.407994  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:27.433511  527777 cri.go:89] found id: ""
	I1201 21:17:27.433526  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.433533  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:27.433539  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:27.433596  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:27.459609  527777 cri.go:89] found id: ""
	I1201 21:17:27.459622  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.459629  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:27.459635  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:27.459700  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:27.487173  527777 cri.go:89] found id: ""
	I1201 21:17:27.487186  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.487193  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:27.487199  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:27.487257  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:27.512860  527777 cri.go:89] found id: ""
	I1201 21:17:27.512874  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.512881  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:27.512889  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:27.512901  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:27.541723  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:27.541739  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:27.606990  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:27.607009  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:27.622689  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:27.622705  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:27.700563  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:27.692859   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.693627   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695255   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695560   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.697023   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:27.692859   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.693627   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695255   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695560   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.697023   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:27.700573  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:27.700586  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:30.289250  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:30.300157  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:30.300217  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:30.327373  527777 cri.go:89] found id: ""
	I1201 21:17:30.327394  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.327405  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:30.327420  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:30.327492  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:30.353615  527777 cri.go:89] found id: ""
	I1201 21:17:30.353629  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.353636  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:30.353642  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:30.353702  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:30.385214  527777 cri.go:89] found id: ""
	I1201 21:17:30.385228  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.385235  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:30.385240  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:30.385300  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:30.415674  527777 cri.go:89] found id: ""
	I1201 21:17:30.415688  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.415695  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:30.415701  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:30.415767  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:30.442641  527777 cri.go:89] found id: ""
	I1201 21:17:30.442656  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.442663  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:30.442668  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:30.442726  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:30.469997  527777 cri.go:89] found id: ""
	I1201 21:17:30.470010  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.470017  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:30.470023  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:30.470081  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:30.495554  527777 cri.go:89] found id: ""
	I1201 21:17:30.495570  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.495579  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:30.495587  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:30.495599  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:30.559878  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:30.552159   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.552978   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554577   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554896   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.556427   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:30.552159   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.552978   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554577   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554896   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.556427   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:30.559888  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:30.559899  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:30.635560  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:30.635581  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:30.673666  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:30.673682  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:30.747787  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:30.747808  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:33.264623  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:33.276366  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:33.276427  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:33.306447  527777 cri.go:89] found id: ""
	I1201 21:17:33.306461  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.306473  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:33.306478  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:33.306538  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:33.334715  527777 cri.go:89] found id: ""
	I1201 21:17:33.334730  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.334738  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:33.334744  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:33.334814  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:33.365674  527777 cri.go:89] found id: ""
	I1201 21:17:33.365690  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.365698  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:33.365705  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:33.365774  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:33.396072  527777 cri.go:89] found id: ""
	I1201 21:17:33.396089  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.396096  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:33.396103  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:33.396175  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:33.429356  527777 cri.go:89] found id: ""
	I1201 21:17:33.429372  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.429381  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:33.429387  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:33.429461  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:33.457917  527777 cri.go:89] found id: ""
	I1201 21:17:33.457932  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.457941  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:33.457948  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:33.458022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:33.490167  527777 cri.go:89] found id: ""
	I1201 21:17:33.490182  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.490190  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:33.490199  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:33.490212  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:33.558131  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:33.558155  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:33.575080  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:33.575101  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:33.657808  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:33.644900   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.645597   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647206   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647744   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.649342   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:33.644900   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.645597   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647206   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647744   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.649342   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:33.657834  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:33.657848  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:33.754296  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:33.754323  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:36.289647  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:36.300774  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:36.300833  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:36.327492  527777 cri.go:89] found id: ""
	I1201 21:17:36.327507  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.327514  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:36.327520  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:36.327583  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:36.359515  527777 cri.go:89] found id: ""
	I1201 21:17:36.359529  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.359537  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:36.359542  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:36.359606  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:36.387977  527777 cri.go:89] found id: ""
	I1201 21:17:36.387990  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.387997  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:36.388002  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:36.388058  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:36.413410  527777 cri.go:89] found id: ""
	I1201 21:17:36.413429  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.413436  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:36.413442  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:36.413499  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:36.440588  527777 cri.go:89] found id: ""
	I1201 21:17:36.440614  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.440622  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:36.440627  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:36.440698  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:36.471404  527777 cri.go:89] found id: ""
	I1201 21:17:36.471419  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.471427  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:36.471433  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:36.471500  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:36.499502  527777 cri.go:89] found id: ""
	I1201 21:17:36.499518  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.499528  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:36.499536  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:36.499546  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:36.568027  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:36.568052  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:36.584561  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:36.584580  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:36.665718  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:36.648985   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.649527   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651261   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651644   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.653266   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:36.648985   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.649527   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651261   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651644   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.653266   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:36.665728  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:36.665740  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:36.748791  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:36.748812  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:39.285189  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:39.296369  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:39.296438  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:39.323280  527777 cri.go:89] found id: ""
	I1201 21:17:39.323294  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.323306  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:39.323312  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:39.323379  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:39.352092  527777 cri.go:89] found id: ""
	I1201 21:17:39.352107  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.352115  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:39.352120  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:39.352187  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:39.379352  527777 cri.go:89] found id: ""
	I1201 21:17:39.379367  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.379375  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:39.379382  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:39.379446  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:39.406925  527777 cri.go:89] found id: ""
	I1201 21:17:39.406940  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.406947  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:39.406954  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:39.407022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:39.434427  527777 cri.go:89] found id: ""
	I1201 21:17:39.434442  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.434450  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:39.434455  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:39.434521  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:39.466725  527777 cri.go:89] found id: ""
	I1201 21:17:39.466741  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.466748  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:39.466755  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:39.466821  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:39.494952  527777 cri.go:89] found id: ""
	I1201 21:17:39.494968  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.494976  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:39.494985  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:39.494998  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:39.510984  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:39.511002  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:39.585968  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:39.576561   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.577151   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.578340   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.579982   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.580410   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:39.576561   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.577151   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.578340   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.579982   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.580410   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:39.585981  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:39.585993  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:39.669009  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:39.669033  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:39.705170  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:39.705189  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:42.275450  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:42.287572  527777 kubeadm.go:602] duration metric: took 4m1.888207918s to restartPrimaryControlPlane
	W1201 21:17:42.287658  527777 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1201 21:17:42.287747  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1201 21:17:42.711674  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 21:17:42.725511  527777 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 21:17:42.734239  527777 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 21:17:42.734308  527777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 21:17:42.743050  527777 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 21:17:42.743060  527777 kubeadm.go:158] found existing configuration files:
	
	I1201 21:17:42.743120  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1201 21:17:42.751678  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 21:17:42.751731  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 21:17:42.759481  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1201 21:17:42.767903  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 21:17:42.767964  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 21:17:42.776067  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1201 21:17:42.784283  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 21:17:42.784355  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 21:17:42.792582  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1201 21:17:42.801449  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 21:17:42.801518  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 21:17:42.809783  527777 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 21:17:42.849635  527777 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 21:17:42.849689  527777 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 21:17:42.929073  527777 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 21:17:42.929165  527777 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1201 21:17:42.929199  527777 kubeadm.go:319] OS: Linux
	I1201 21:17:42.929243  527777 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 21:17:42.929296  527777 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1201 21:17:42.929342  527777 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 21:17:42.929388  527777 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 21:17:42.929435  527777 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 21:17:42.929482  527777 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 21:17:42.929526  527777 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 21:17:42.929573  527777 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 21:17:42.929617  527777 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1201 21:17:43.002025  527777 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 21:17:43.002165  527777 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 21:17:43.002258  527777 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 21:17:43.013458  527777 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 21:17:43.017000  527777 out.go:252]   - Generating certificates and keys ...
	I1201 21:17:43.017095  527777 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 21:17:43.017170  527777 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 21:17:43.017252  527777 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1201 21:17:43.017311  527777 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1201 21:17:43.017379  527777 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1201 21:17:43.017434  527777 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1201 21:17:43.017501  527777 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1201 21:17:43.017561  527777 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1201 21:17:43.017634  527777 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1201 21:17:43.017705  527777 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1201 21:17:43.017832  527777 kubeadm.go:319] [certs] Using the existing "sa" key
	I1201 21:17:43.017892  527777 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 21:17:43.133992  527777 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 21:17:43.467350  527777 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 21:17:43.613021  527777 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 21:17:43.910424  527777 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 21:17:44.196121  527777 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 21:17:44.196632  527777 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 21:17:44.199145  527777 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 21:17:44.202480  527777 out.go:252]   - Booting up control plane ...
	I1201 21:17:44.202575  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 21:17:44.202651  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 21:17:44.202718  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 21:17:44.217388  527777 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 21:17:44.217714  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 21:17:44.228031  527777 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 21:17:44.228400  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 21:17:44.228517  527777 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 21:17:44.357408  527777 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 21:17:44.357522  527777 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 21:21:44.357404  527777 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000240491s
	I1201 21:21:44.357429  527777 kubeadm.go:319] 
	I1201 21:21:44.357487  527777 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1201 21:21:44.357523  527777 kubeadm.go:319] 	- The kubelet is not running
	I1201 21:21:44.357633  527777 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1201 21:21:44.357637  527777 kubeadm.go:319] 
	I1201 21:21:44.357830  527777 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1201 21:21:44.357863  527777 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1201 21:21:44.357893  527777 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1201 21:21:44.357896  527777 kubeadm.go:319] 
	I1201 21:21:44.361511  527777 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1201 21:21:44.361943  527777 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1201 21:21:44.362051  527777 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 21:21:44.362287  527777 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1201 21:21:44.362292  527777 kubeadm.go:319] 
	I1201 21:21:44.362361  527777 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1201 21:21:44.362491  527777 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000240491s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
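The failure above comes from kubeadm's `[kubelet-check]` phase, which polls `http://127.0.0.1:10248/healthz` (the log names the equivalent `curl -sSL` call) until it succeeds or a 4m0s deadline passes. A minimal sketch of that retry-until-deadline pattern, with the probe command left generic since the real endpoint only exists on a node with a running kubelet:

```shell
# Retry a probe command until it succeeds or the deadline (seconds) passes.
# Mirrors kubeadm's "[kubelet-check] ... This can take up to 4m0s" loop;
# the function itself is generic and the kubelet URL below is the real target.
wait_healthy() {   # usage: wait_healthy <deadline_secs> <probe-cmd...>
  deadline=$1; shift
  end=$(( $(date +%s) + deadline ))
  until "$@" >/dev/null 2>&1; do
    # Deadline exceeded: same outcome as "The kubelet is not healthy after 4m0s"
    [ "$(date +%s)" -ge "$end" ] && return 1
    sleep 1
  done
}

# On an actual node (assumption: kubelet healthz on its default port 10248):
#   wait_healthy 240 curl -sSf http://127.0.0.1:10248/healthz
```

When the probe never succeeds, as in this run, the caller sees the same `context deadline exceeded` class of failure logged at L55578.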
	
	I1201 21:21:44.362579  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1201 21:21:44.772977  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 21:21:44.786214  527777 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 21:21:44.786270  527777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 21:21:44.794556  527777 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 21:21:44.794568  527777 kubeadm.go:158] found existing configuration files:
	
	I1201 21:21:44.794622  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1201 21:21:44.803048  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 21:21:44.803106  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 21:21:44.810695  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1201 21:21:44.818882  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 21:21:44.818947  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 21:21:44.827077  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1201 21:21:44.834936  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 21:21:44.834995  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 21:21:44.843074  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1201 21:21:44.851084  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 21:21:44.851166  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
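The four grep-then-rm pairs above are minikube's stale-kubeconfig cleanup: each conf under `/etc/kubernetes` is kept only if it references the expected control-plane endpoint, otherwise removed so the retried `kubeadm init` can rewrite it. A sketch of that loop, run against a temporary directory instead of the real `/etc/kubernetes` (the file contents here are illustrative stand-ins):

```shell
# Stale-kubeconfig cleanup sketch. $confdir stands in for /etc/kubernetes;
# the endpoint matches the one grepped for in the log.
endpoint="https://control-plane.minikube.internal:8441"
confdir=$(mktemp -d)

# Simulated state: admin.conf is current, kubelet.conf points at an old host.
printf 'server: %s\n' "$endpoint" > "$confdir/admin.conf"
printf 'server: https://old-host:8443\n' > "$confdir/kubelet.conf"

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  # Missing files make grep exit non-zero too, so they fall through to rm -f,
  # which is a no-op -- the same behavior the log shows for absent confs.
  if ! grep -q "$endpoint" "$confdir/$f" 2>/dev/null; then
    rm -f "$confdir/$f"
  fi
done
```

In this run every grep exits with status 2 because the earlier `kubeadm reset` already deleted the files, so the cleanup is effectively a no-op before the retry.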
	I1201 21:21:44.858721  527777 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 21:21:44.981319  527777 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1201 21:21:44.981788  527777 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1201 21:21:45.157392  527777 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 21:25:46.243317  527777 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1201 21:25:46.243344  527777 kubeadm.go:319] 
	I1201 21:25:46.243413  527777 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1201 21:25:46.246817  527777 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 21:25:46.246871  527777 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 21:25:46.246962  527777 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 21:25:46.247022  527777 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1201 21:25:46.247057  527777 kubeadm.go:319] OS: Linux
	I1201 21:25:46.247100  527777 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 21:25:46.247175  527777 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1201 21:25:46.247246  527777 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 21:25:46.247312  527777 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 21:25:46.247369  527777 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 21:25:46.247421  527777 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 21:25:46.247464  527777 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 21:25:46.247511  527777 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 21:25:46.247555  527777 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1201 21:25:46.247626  527777 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 21:25:46.247719  527777 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 21:25:46.247811  527777 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 21:25:46.247872  527777 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 21:25:46.250950  527777 out.go:252]   - Generating certificates and keys ...
	I1201 21:25:46.251041  527777 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 21:25:46.251105  527777 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 21:25:46.251224  527777 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1201 21:25:46.251290  527777 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1201 21:25:46.251369  527777 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1201 21:25:46.251431  527777 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1201 21:25:46.251495  527777 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1201 21:25:46.251555  527777 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1201 21:25:46.251629  527777 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1201 21:25:46.251704  527777 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1201 21:25:46.251741  527777 kubeadm.go:319] [certs] Using the existing "sa" key
	I1201 21:25:46.251795  527777 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 21:25:46.251845  527777 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 21:25:46.251899  527777 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 21:25:46.251951  527777 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 21:25:46.252012  527777 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 21:25:46.252065  527777 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 21:25:46.252149  527777 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 21:25:46.252213  527777 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 21:25:46.255065  527777 out.go:252]   - Booting up control plane ...
	I1201 21:25:46.255213  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 21:25:46.255292  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 21:25:46.255359  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 21:25:46.255466  527777 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 21:25:46.255590  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 21:25:46.255713  527777 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 21:25:46.255816  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 21:25:46.255856  527777 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 21:25:46.256011  527777 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 21:25:46.256134  527777 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 21:25:46.256200  527777 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000272278s
	I1201 21:25:46.256203  527777 kubeadm.go:319] 
	I1201 21:25:46.256259  527777 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1201 21:25:46.256290  527777 kubeadm.go:319] 	- The kubelet is not running
	I1201 21:25:46.256400  527777 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1201 21:25:46.256404  527777 kubeadm.go:319] 
	I1201 21:25:46.256508  527777 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1201 21:25:46.256540  527777 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1201 21:25:46.256569  527777 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1201 21:25:46.256592  527777 kubeadm.go:319] 
	I1201 21:25:46.256631  527777 kubeadm.go:403] duration metric: took 12m5.895739008s to StartCluster
	I1201 21:25:46.256661  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:25:46.256721  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:25:46.286008  527777 cri.go:89] found id: ""
	I1201 21:25:46.286022  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.286029  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:25:46.286034  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:25:46.286096  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:25:46.311936  527777 cri.go:89] found id: ""
	I1201 21:25:46.311950  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.311957  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:25:46.311963  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:25:46.312022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:25:46.338008  527777 cri.go:89] found id: ""
	I1201 21:25:46.338022  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.338029  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:25:46.338035  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:25:46.338094  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:25:46.364430  527777 cri.go:89] found id: ""
	I1201 21:25:46.364446  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.364453  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:25:46.364459  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:25:46.364519  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:25:46.390553  527777 cri.go:89] found id: ""
	I1201 21:25:46.390568  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.390574  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:25:46.390580  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:25:46.390638  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:25:46.416135  527777 cri.go:89] found id: ""
	I1201 21:25:46.416149  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.416156  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:25:46.416161  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:25:46.416215  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:25:46.441110  527777 cri.go:89] found id: ""
	I1201 21:25:46.441124  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.441131  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:25:46.441139  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:25:46.441160  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:25:46.456311  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:25:46.456328  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:25:46.535568  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:25:46.527894   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.528437   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.529932   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.530345   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.531878   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:25:46.527894   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.528437   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.529932   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.530345   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.531878   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:25:46.535579  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:25:46.535591  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:25:46.613336  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:25:46.613357  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:25:46.643384  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:25:46.643410  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
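After the failed start, minikube gathers diagnostics (dmesg, `kubectl describe nodes`, CRI-O and kubelet journals, container status), tolerating individual command failures so one broken source does not abort the sweep. A sketch of that pattern; the command list is taken from the log, and whether tools like `journalctl` or `crictl` exist depends on the node:

```shell
# Collect each diagnostic command's output into one file, continuing past
# failures (e.g. a missing tool or an unreachable apiserver, as with the
# "describe nodes" step in this run).
gather() {   # usage: gather <outfile> <cmd...>
  outfile=$1; shift
  {
    printf '\n==> %s\n' "$*"
    "$@" 2>&1 || printf '(command failed, continuing)\n'
  } >> "$outfile"
}

logfile=$(mktemp)
gather "$logfile" uname -r
gather "$logfile" journalctl -u kubelet -n 400
gather "$logfile" journalctl -u crio -n 400
gather "$logfile" crictl ps -a
```

The `describe nodes` step fails here (connection refused on `localhost:8441`) precisely because the apiserver container was never started, which is why tolerant collection matters.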
	W1201 21:25:46.714793  527777 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000272278s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1201 21:25:46.714844  527777 out.go:285] * 
	W1201 21:25:46.714913  527777 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000272278s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1201 21:25:46.714940  527777 out.go:285] * 
	W1201 21:25:46.717121  527777 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 21:25:46.722121  527777 out.go:203] 
	W1201 21:25:46.725981  527777 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000272278s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1201 21:25:46.726037  527777 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1201 21:25:46.726060  527777 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1201 21:25:46.729457  527777 out.go:203] 
	
	
	==> CRI-O <==
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028303365Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028345423Z" level=info msg="Starting seccomp notifier watcher"
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028394488Z" level=info msg="Create NRI interface"
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028507921Z" level=info msg="built-in NRI default validator is disabled"
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028517906Z" level=info msg="runtime interface created"
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028533045Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.02854001Z" level=info msg="runtime interface starting up..."
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028547362Z" level=info msg="starting plugins..."
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028562434Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028635598Z" level=info msg="No systemd watchdog enabled"
	Dec 01 21:13:39 functional-198694 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.006897207Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=6899020c-e81d-4ca2-b78d-1b19ba925f8d name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.008172907Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=85515e67-9e24-4eed-9690-db5bbe0ab759 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.009097715Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=c8e368b2-0181-4d6d-8bf3-4e28d45c02c7 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.009733916Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=0fc0011a-c9a7-42d2-a5b8-995e0a543565 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.010282103Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=345cda63-6b08-4110-81b2-46c3bae48473 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.010980374Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=5390fbb3-60dc-4145-9a48-c3c46e1b2cb6 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.011663876Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=6ae96308-5839-4062-8cab-2394de4e389c name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.162637929Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=d0f240a5-2441-4e93-9b4a-f3d4bd7ad9c7 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.164075956Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=0616d176-adc2-492a-ae1c-f0f024bafeaf name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.164807688Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=b908770b-6817-4806-aa77-5607a1538338 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.167796208Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=3df13166-7c04-4daf-93af-7c9be539fdad name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.168806661Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=16a3784c-0cb1-4a72-824d-e721ee5352ce name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.16959947Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=de159f67-5257-40aa-8e51-ddafe4c8e78c name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.170614172Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1626a35a-f287-41b9-b7fb-8a0f7945ff57 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:25:50.179708   21782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:50.180128   21782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:50.181813   21782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:50.182576   21782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:50.184716   21782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 1 19:31] hrtimer: interrupt took 3224715 ns
	[Dec 1 20:00] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:16] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 1 20:22] systemd-journald[231]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 1 20:37] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:38] overlayfs: idmapped layers are currently not supported
	[  +0.076902] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 1 20:44] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:45] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:58] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:25:50 up  3:08,  0 user,  load average: 0.30, 0.23, 0.41
	Linux functional-198694 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 01 21:25:47 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:25:48 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 643.
	Dec 01 21:25:48 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:25:48 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:25:48 functional-198694 kubelet[21654]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:25:48 functional-198694 kubelet[21654]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:25:48 functional-198694 kubelet[21654]: E1201 21:25:48.474040   21654 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:25:48 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:25:48 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:25:49 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 644.
	Dec 01 21:25:49 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:25:49 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:25:49 functional-198694 kubelet[21676]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:25:49 functional-198694 kubelet[21676]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:25:49 functional-198694 kubelet[21676]: E1201 21:25:49.209677   21676 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:25:49 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:25:49 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:25:49 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 645.
	Dec 01 21:25:49 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:25:49 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:25:49 functional-198694 kubelet[21725]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:25:49 functional-198694 kubelet[21725]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:25:49 functional-198694 kubelet[21725]: E1201 21:25:49.962976   21725 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:25:49 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:25:49 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694: exit status 2 (348.497901ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-198694" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.18s)
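Note on the failure above: the kubelet journal shows the actual root cause — kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), and the kubeadm `[WARNING SystemVerification]` message says cgroup v1 support must be explicitly re-enabled by setting the kubelet configuration option 'FailCgroupV1' to 'false'. A minimal KubeletConfiguration fragment matching that hint might look like the sketch below; the lowerCamelCase field name `failCgroupV1` is assumed from the Go option name in the warning, and per the same warning the validation must additionally be skipped explicitly.

```yaml
# Sketch of the opt-in described by the [WARNING SystemVerification] message.
# Assumption: the KubeletConfiguration field is the lowerCamelCase form
# 'failCgroupV1' of the option name 'FailCgroupV1' quoted in the warning.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false
```

Migrating the CI host to cgroups v2 (rather than opting back into v1) is the direction the warning's linked KEP recommends.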

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-198694 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-198694 apply -f testdata/invalidsvc.yaml: exit status 1 (59.126813ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-198694 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.85s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-198694 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-198694 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-198694 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-198694 --alsologtostderr -v=1] stderr:
I1201 21:27:58.337291  546449 out.go:360] Setting OutFile to fd 1 ...
I1201 21:27:58.337437  546449 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 21:27:58.337448  546449 out.go:374] Setting ErrFile to fd 2...
I1201 21:27:58.337454  546449 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 21:27:58.337701  546449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
I1201 21:27:58.337974  546449 mustload.go:66] Loading cluster: functional-198694
I1201 21:27:58.338418  546449 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 21:27:58.339020  546449 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
I1201 21:27:58.356549  546449 host.go:66] Checking if "functional-198694" exists ...
I1201 21:27:58.356873  546449 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1201 21:27:58.418437  546449 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 21:27:58.40881821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1201 21:27:58.418569  546449 api_server.go:166] Checking apiserver status ...
I1201 21:27:58.418644  546449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1201 21:27:58.418691  546449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
I1201 21:27:58.436144  546449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
W1201 21:27:58.546230  546449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1201 21:27:58.549552  546449 out.go:179] * The control-plane node functional-198694 apiserver is not running: (state=Stopped)
I1201 21:27:58.552434  546449 out.go:179]   To start a cluster, run: "minikube start -p functional-198694"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-198694
helpers_test.go:243: (dbg) docker inspect functional-198694:

-- stdout --
	[
	    {
	        "Id": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	        "Created": "2025-12-01T20:58:43.365574809Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515902,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:58:43.423541772Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hostname",
	        "HostsPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hosts",
	        "LogPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8-json.log",
	        "Name": "/functional-198694",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-198694:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-198694",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	                "LowerDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26-init/diff:/var/lib/docker/overlay2/f0ba49b44048d740697b37803f992c2f7a99e21ce77995ff128ceffc01329aa1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-198694",
	                "Source": "/var/lib/docker/volumes/functional-198694/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-198694",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-198694",
	                "name.minikube.sigs.k8s.io": "functional-198694",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8cb3cb57c35171bfce361b9e0de9c9f36ef89baf5e4ad0dd73159d10f1056820",
	            "SandboxKey": "/var/run/docker/netns/8cb3cb57c351",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-198694": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:9a:72:4c:a4:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9750c903db8645b2871ee2eb6fd897b77e607b9a995005513c7bcf81da63c819",
	                    "EndpointID": "884d9ec9fdfc44c10ccd4516f4ea05a765fb3ccb2118db0e8af2392e8613c402",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-198694",
	                        "e545295bd958"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
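The `NetworkSettings.Ports` map in the inspect output above pairs each exposed container port with its `127.0.0.1` host binding (e.g. the apiserver on `8441/tcp` is reachable at host port `33183`). A minimal sketch of extracting those bindings from inspect-style JSON — the sample below is abridged from the output above, and `host_bindings` is a hypothetical helper, not part of minikube or the Docker SDK:

```python
import json

# Abridged from the docker inspect output above: container port -> host bindings.
inspect_json = """
{
    "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "33180"}],
        "8441/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33183"}]
    }
}
"""

def host_bindings(network_settings: dict) -> dict:
    """Map each exposed container port to its list of host "ip:port" strings."""
    return {
        port: [f"{b['HostIp']}:{b['HostPort']}" for b in (bindings or [])]
        for port, bindings in network_settings["Ports"].items()
    }

bindings = host_bindings(json.loads(inspect_json))
print(bindings["8441/tcp"])  # host binding for the forwarded apiserver port
```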
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694: exit status 2 (336.456338ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons    │ functional-198694 addons list                                                                                                                       │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	│ addons    │ functional-198694 addons list -o json                                                                                                               │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	│ mount     │ -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2848316081/001:/mount-9p --alsologtostderr -v=1              │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ ssh       │ functional-198694 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ ssh       │ functional-198694 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	│ ssh       │ functional-198694 ssh -- ls -la /mount-9p                                                                                                           │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	│ ssh       │ functional-198694 ssh cat /mount-9p/test-1764624471985109686                                                                                        │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	│ ssh       │ functional-198694 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ ssh       │ functional-198694 ssh sudo umount -f /mount-9p                                                                                                      │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	│ mount     │ -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1645726952/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ ssh       │ functional-198694 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ ssh       │ functional-198694 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	│ ssh       │ functional-198694 ssh -- ls -la /mount-9p                                                                                                           │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	│ ssh       │ functional-198694 ssh sudo umount -f /mount-9p                                                                                                      │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ mount     │ -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1388782895/001:/mount1 --alsologtostderr -v=1                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ mount     │ -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1388782895/001:/mount2 --alsologtostderr -v=1                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ ssh       │ functional-198694 ssh findmnt -T /mount1                                                                                                            │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	│ mount     │ -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1388782895/001:/mount3 --alsologtostderr -v=1                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ ssh       │ functional-198694 ssh findmnt -T /mount2                                                                                                            │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	│ ssh       │ functional-198694 ssh findmnt -T /mount3                                                                                                            │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	│ mount     │ -p functional-198694 --kill=true                                                                                                                    │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ start     │ -p functional-198694 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ start     │ -p functional-198694 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ start     │ -p functional-198694 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-198694 --alsologtostderr -v=1                                                                                      │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 21:27:58
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 21:27:58.101372  546398 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:27:58.101548  546398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:27:58.101555  546398 out.go:374] Setting ErrFile to fd 2...
	I1201 21:27:58.101561  546398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:27:58.101999  546398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:27:58.102413  546398 out.go:368] Setting JSON to false
	I1201 21:27:58.103413  546398 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11428,"bootTime":1764613051,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 21:27:58.103489  546398 start.go:143] virtualization:  
	I1201 21:27:58.106728  546398 out.go:179] * [functional-198694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 21:27:58.110431  546398 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 21:27:58.110606  546398 notify.go:221] Checking for updates...
	I1201 21:27:58.116551  546398 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 21:27:58.119475  546398 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:27:58.122388  546398 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 21:27:58.125369  546398 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 21:27:58.128308  546398 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 21:27:58.131852  546398 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:27:58.132449  546398 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 21:27:58.170277  546398 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 21:27:58.170455  546398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:27:58.267551  546398 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 21:27:58.257057975 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:27:58.267677  546398 docker.go:319] overlay module found
	I1201 21:27:58.271024  546398 out.go:179] * Using the docker driver based on existing profile
	I1201 21:27:58.274017  546398 start.go:309] selected driver: docker
	I1201 21:27:58.274047  546398 start.go:927] validating driver "docker" against &{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:27:58.274175  546398 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 21:27:58.277846  546398 out.go:203] 
	W1201 21:27:58.280883  546398 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1201 21:27:58.283947  546398 out.go:203] 
	
	
	==> CRI-O <==
	Dec 01 21:25:55 functional-198694 crio[10476]: time="2025-12-01T21:25:55.958248239Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-198694 found" id=60b0690d-119a-4b74-971b-527f5644551b name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:56 functional-198694 crio[10476]: time="2025-12-01T21:25:56.002277819Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-198694" id=9d0f6f34-dff8-41d4-bb31-fb357f4c68af name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:56 functional-198694 crio[10476]: time="2025-12-01T21:25:56.002458196Z" level=info msg="Image localhost/kicbase/echo-server:functional-198694 not found" id=9d0f6f34-dff8-41d4-bb31-fb357f4c68af name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:56 functional-198694 crio[10476]: time="2025-12-01T21:25:56.002508862Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-198694 found" id=9d0f6f34-dff8-41d4-bb31-fb357f4c68af name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.001788017Z" level=info msg="Checking image status: kicbase/echo-server:functional-198694" id=1eef1f83-f41d-4072-8efe-21875777fc46 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.038212447Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-198694" id=f5d9dd7b-f748-4c02-b084-bf73e2e48bd8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.038444368Z" level=info msg="Image docker.io/kicbase/echo-server:functional-198694 not found" id=f5d9dd7b-f748-4c02-b084-bf73e2e48bd8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.038497101Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-198694 found" id=f5d9dd7b-f748-4c02-b084-bf73e2e48bd8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.071982198Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-198694" id=1a2cb465-f3f7-4483-9653-cf24325561b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.072139634Z" level=info msg="Image localhost/kicbase/echo-server:functional-198694 not found" id=1a2cb465-f3f7-4483-9653-cf24325561b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.072179354Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-198694 found" id=1a2cb465-f3f7-4483-9653-cf24325561b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.449575902Z" level=info msg="Checking image status: kicbase/echo-server:functional-198694" id=9c839e7f-fb5f-4968-a4bc-4d98c332783b name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.503799057Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-198694" id=1fbb90ce-8cb8-4806-9a58-81e0c582e6a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.504091628Z" level=info msg="Image docker.io/kicbase/echo-server:functional-198694 not found" id=1fbb90ce-8cb8-4806-9a58-81e0c582e6a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.504215777Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-198694 found" id=1fbb90ce-8cb8-4806-9a58-81e0c582e6a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.534112776Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-198694" id=7580a9c0-62f6-4f3e-8517-39fe2d123618 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.534244342Z" level=info msg="Image localhost/kicbase/echo-server:functional-198694 not found" id=7580a9c0-62f6-4f3e-8517-39fe2d123618 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.534283193Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-198694 found" id=7580a9c0-62f6-4f3e-8517-39fe2d123618 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.532335101Z" level=info msg="Checking image status: kicbase/echo-server:functional-198694" id=cd62b92b-638f-4a1a-ae2d-ff287a877bee name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.56735868Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-198694" id=c7e91211-bb1c-46f2-bcd8-a92da093ad25 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.56751706Z" level=info msg="Image docker.io/kicbase/echo-server:functional-198694 not found" id=c7e91211-bb1c-46f2-bcd8-a92da093ad25 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.56755705Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-198694 found" id=c7e91211-bb1c-46f2-bcd8-a92da093ad25 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.61237743Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-198694" id=f983679c-8adf-4453-8d30-a89c874062b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.612552228Z" level=info msg="Image localhost/kicbase/echo-server:functional-198694 not found" id=f983679c-8adf-4453-8d30-a89c874062b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.612623471Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-198694 found" id=f983679c-8adf-4453-8d30-a89c874062b7 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:27:59.625934   24390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:27:59.626496   24390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:27:59.627946   24390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:27:59.628277   24390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:27:59.629769   24390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 1 19:31] hrtimer: interrupt took 3224715 ns
	[Dec 1 20:00] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:16] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 1 20:22] systemd-journald[231]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 1 20:37] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:38] overlayfs: idmapped layers are currently not supported
	[  +0.076902] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 1 20:44] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:45] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:58] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:27:59 up  3:10,  0 user,  load average: 0.98, 0.50, 0.49
	Linux functional-198694 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 01 21:27:57 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:27:57 functional-198694 kubelet[24265]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:27:57 functional-198694 kubelet[24265]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:27:57 functional-198694 kubelet[24265]: E1201 21:27:57.468287   24265 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:27:57 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:27:57 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:27:58 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 816.
	Dec 01 21:27:58 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:27:58 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:27:58 functional-198694 kubelet[24278]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:27:58 functional-198694 kubelet[24278]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:27:58 functional-198694 kubelet[24278]: E1201 21:27:58.240119   24278 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:27:58 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:27:58 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:27:58 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 817.
	Dec 01 21:27:58 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:27:58 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:27:58 functional-198694 kubelet[24300]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:27:58 functional-198694 kubelet[24300]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:27:58 functional-198694 kubelet[24300]: E1201 21:27:58.968021   24300 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:27:58 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:27:58 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:27:59 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 818.
	Dec 01 21:27:59 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:27:59 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694: exit status 2 (414.814124ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-198694" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.85s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-198694 status: exit status 2 (341.729344ms)

-- stdout --
	functional-198694
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-198694 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-198694 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (328.963522ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-198694 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-198694 status -o json: exit status 2 (337.984204ms)

-- stdout --
	{"Name":"functional-198694","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-198694 status -o json" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-198694
helpers_test.go:243: (dbg) docker inspect functional-198694:

-- stdout --
	[
	    {
	        "Id": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	        "Created": "2025-12-01T20:58:43.365574809Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515902,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:58:43.423541772Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hostname",
	        "HostsPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hosts",
	        "LogPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8-json.log",
	        "Name": "/functional-198694",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-198694:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-198694",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	                "LowerDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26-init/diff:/var/lib/docker/overlay2/f0ba49b44048d740697b37803f992c2f7a99e21ce77995ff128ceffc01329aa1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-198694",
	                "Source": "/var/lib/docker/volumes/functional-198694/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-198694",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-198694",
	                "name.minikube.sigs.k8s.io": "functional-198694",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8cb3cb57c35171bfce361b9e0de9c9f36ef89baf5e4ad0dd73159d10f1056820",
	            "SandboxKey": "/var/run/docker/netns/8cb3cb57c351",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-198694": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:9a:72:4c:a4:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9750c903db8645b2871ee2eb6fd897b77e607b9a995005513c7bcf81da63c819",
	                    "EndpointID": "884d9ec9fdfc44c10ccd4516f4ea05a765fb3ccb2118db0e8af2392e8613c402",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-198694",
	                        "e545295bd958"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694: exit status 2 (312.070914ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-198694 ssh sudo cat /etc/ssl/certs/486002.pem                                                                                                  │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ image   │ functional-198694 image ls                                                                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ ssh     │ functional-198694 ssh sudo cat /usr/share/ca-certificates/486002.pem                                                                                      │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ image   │ functional-198694 image save kicbase/echo-server:functional-198694 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ ssh     │ functional-198694 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ image   │ functional-198694 image rm kicbase/echo-server:functional-198694 --alsologtostderr                                                                        │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ ssh     │ functional-198694 ssh sudo cat /etc/ssl/certs/4860022.pem                                                                                                 │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ image   │ functional-198694 image ls                                                                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ ssh     │ functional-198694 ssh sudo cat /usr/share/ca-certificates/4860022.pem                                                                                     │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:26 UTC │
	│ image   │ functional-198694 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:26 UTC │
	│ image   │ functional-198694 image save --daemon kicbase/echo-server:functional-198694 --alsologtostderr                                                             │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │ 01 Dec 25 21:26 UTC │
	│ ssh     │ functional-198694 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │ 01 Dec 25 21:26 UTC │
	│ ssh     │ functional-198694 ssh sudo cat /etc/test/nested/copy/486002/hosts                                                                                         │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │ 01 Dec 25 21:26 UTC │
	│ service │ functional-198694 service list                                                                                                                            │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │                     │
	│ ssh     │ functional-198694 ssh echo hello                                                                                                                          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │ 01 Dec 25 21:26 UTC │
	│ service │ functional-198694 service list -o json                                                                                                                    │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │                     │
	│ ssh     │ functional-198694 ssh cat /etc/hostname                                                                                                                   │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │ 01 Dec 25 21:26 UTC │
	│ service │ functional-198694 service --namespace=default --https --url hello-node                                                                                    │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │                     │
	│ tunnel  │ functional-198694 tunnel --alsologtostderr                                                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │                     │
	│ tunnel  │ functional-198694 tunnel --alsologtostderr                                                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │                     │
	│ service │ functional-198694 service hello-node --url --format={{.IP}}                                                                                               │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │                     │
	│ service │ functional-198694 service hello-node --url                                                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │                     │
	│ tunnel  │ functional-198694 tunnel --alsologtostderr                                                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │                     │
	│ addons  │ functional-198694 addons list                                                                                                                             │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	│ addons  │ functional-198694 addons list -o json                                                                                                                     │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 21:13:35
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 21:13:35.338314  527777 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:13:35.338426  527777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:13:35.338431  527777 out.go:374] Setting ErrFile to fd 2...
	I1201 21:13:35.338435  527777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:13:35.339011  527777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:13:35.339669  527777 out.go:368] Setting JSON to false
	I1201 21:13:35.340628  527777 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10565,"bootTime":1764613051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 21:13:35.340767  527777 start.go:143] virtualization:  
	I1201 21:13:35.344231  527777 out.go:179] * [functional-198694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 21:13:35.348003  527777 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 21:13:35.348182  527777 notify.go:221] Checking for updates...
	I1201 21:13:35.353585  527777 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 21:13:35.356421  527777 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:13:35.359084  527777 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 21:13:35.361859  527777 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 21:13:35.364606  527777 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 21:13:35.367906  527777 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:13:35.368004  527777 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 21:13:35.404299  527777 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 21:13:35.404422  527777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:13:35.463515  527777 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-01 21:13:35.453981974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:13:35.463609  527777 docker.go:319] overlay module found
	I1201 21:13:35.466875  527777 out.go:179] * Using the docker driver based on existing profile
	I1201 21:13:35.469781  527777 start.go:309] selected driver: docker
	I1201 21:13:35.469793  527777 start.go:927] validating driver "docker" against &{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:13:35.469882  527777 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 21:13:35.469988  527777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:13:35.530406  527777 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-01 21:13:35.520549629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:13:35.530815  527777 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 21:13:35.530841  527777 cni.go:84] Creating CNI manager for ""
	I1201 21:13:35.530897  527777 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:13:35.530938  527777 start.go:353] cluster config:
	{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:13:35.534086  527777 out.go:179] * Starting "functional-198694" primary control-plane node in "functional-198694" cluster
	I1201 21:13:35.536995  527777 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 21:13:35.539929  527777 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 21:13:35.542786  527777 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:13:35.542873  527777 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 21:13:35.563189  527777 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 21:13:35.563200  527777 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1201 21:13:35.608993  527777 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1201 21:13:35.806403  527777 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1201 21:13:35.806571  527777 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/config.json ...
	I1201 21:13:35.806600  527777 cache.go:107] acquiring lock: {Name:mkc02adc0b0ac86da96d7b1c6f73dd96db198bdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806692  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 21:13:35.806702  527777 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 120.653µs
	I1201 21:13:35.806710  527777 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 21:13:35.806721  527777 cache.go:107] acquiring lock: {Name:mk453dcc67fddeb9d4497c9de9efb4fa1295449c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806753  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 21:13:35.806758  527777 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 38.825µs
	I1201 21:13:35.806764  527777 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806774  527777 cache.go:107] acquiring lock: {Name:mk419ddf7fad28d46855543ef84396416e53becc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806815  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 21:13:35.806831  527777 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 48.901µs
	I1201 21:13:35.806838  527777 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806850  527777 cache.go:243] Successfully downloaded all kic artifacts
	I1201 21:13:35.806851  527777 cache.go:107] acquiring lock: {Name:mka55d294ab8a696f44b35601f713e0abbf24c5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806885  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 21:13:35.806880  527777 start.go:360] acquireMachinesLock for functional-198694: {Name:mk75190be8638b73bbf357fb21be879be3d32136 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806893  527777 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 44.405µs
	I1201 21:13:35.806899  527777 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806914  527777 cache.go:107] acquiring lock: {Name:mk6dcec1fac0989e081c750d70caa7d5974f0e1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806939  527777 start.go:364] duration metric: took 38.547µs to acquireMachinesLock for "functional-198694"
	I1201 21:13:35.806944  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 21:13:35.806949  527777 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 42.124µs
	I1201 21:13:35.806954  527777 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806962  527777 start.go:96] Skipping create...Using existing machine configuration
	I1201 21:13:35.806968  527777 fix.go:54] fixHost starting: 
	I1201 21:13:35.806963  527777 cache.go:107] acquiring lock: {Name:mkf9aa1f704582196eb72cf90c132f43843b4423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806991  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1201 21:13:35.806995  527777 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.558µs
	I1201 21:13:35.807007  527777 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 21:13:35.807016  527777 cache.go:107] acquiring lock: {Name:mk60d129c4890b38a9b86e2bfa4a9fa21bc4f57a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.807045  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 21:13:35.807049  527777 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 34.657µs
	I1201 21:13:35.807054  527777 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 21:13:35.807062  527777 cache.go:107] acquiring lock: {Name:mk345d9c863dd9143d9156cb17f795118869c197 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.807089  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 21:13:35.807094  527777 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 32.54µs
	I1201 21:13:35.807099  527777 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 21:13:35.807107  527777 cache.go:87] Successfully saved all images to host disk.
	I1201 21:13:35.807314  527777 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:13:35.826290  527777 fix.go:112] recreateIfNeeded on functional-198694: state=Running err=<nil>
	W1201 21:13:35.826315  527777 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 21:13:35.829729  527777 out.go:252] * Updating the running docker "functional-198694" container ...
	I1201 21:13:35.829761  527777 machine.go:94] provisionDockerMachine start ...
	I1201 21:13:35.829853  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:35.849270  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:35.849646  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:35.849655  527777 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 21:13:36.014195  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:13:36.014211  527777 ubuntu.go:182] provisioning hostname "functional-198694"
	I1201 21:13:36.014280  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:36.035339  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:36.035672  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:36.035681  527777 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-198694 && echo "functional-198694" | sudo tee /etc/hostname
	I1201 21:13:36.197202  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:13:36.197287  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:36.217632  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:36.217935  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:36.217948  527777 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-198694' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-198694/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-198694' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 21:13:36.367610  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 21:13:36.367629  527777 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-482752/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-482752/.minikube}
	I1201 21:13:36.367658  527777 ubuntu.go:190] setting up certificates
	I1201 21:13:36.367666  527777 provision.go:84] configureAuth start
	I1201 21:13:36.367747  527777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:13:36.387555  527777 provision.go:143] copyHostCerts
	I1201 21:13:36.387627  527777 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem, removing ...
	I1201 21:13:36.387641  527777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 21:13:36.387724  527777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem (1082 bytes)
	I1201 21:13:36.387835  527777 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem, removing ...
	I1201 21:13:36.387840  527777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 21:13:36.387866  527777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem (1123 bytes)
	I1201 21:13:36.387928  527777 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem, removing ...
	I1201 21:13:36.387933  527777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 21:13:36.387959  527777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem (1675 bytes)
	I1201 21:13:36.388014  527777 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem org=jenkins.functional-198694 san=[127.0.0.1 192.168.49.2 functional-198694 localhost minikube]
	I1201 21:13:36.864413  527777 provision.go:177] copyRemoteCerts
	I1201 21:13:36.864488  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 21:13:36.864542  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:36.883147  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:36.987572  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 21:13:37.015924  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1201 21:13:37.037590  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1201 21:13:37.056483  527777 provision.go:87] duration metric: took 688.787749ms to configureAuth
	I1201 21:13:37.056502  527777 ubuntu.go:206] setting minikube options for container-runtime
	I1201 21:13:37.056696  527777 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:13:37.056802  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.075104  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:37.075454  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:37.075468  527777 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 21:13:37.432424  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 21:13:37.432439  527777 machine.go:97] duration metric: took 1.602671146s to provisionDockerMachine
	I1201 21:13:37.432451  527777 start.go:293] postStartSetup for "functional-198694" (driver="docker")
	I1201 21:13:37.432466  527777 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 21:13:37.432544  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 21:13:37.432606  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.457485  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.563609  527777 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 21:13:37.567292  527777 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 21:13:37.567310  527777 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 21:13:37.567329  527777 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/addons for local assets ...
	I1201 21:13:37.567430  527777 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/files for local assets ...
	I1201 21:13:37.567517  527777 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> 4860022.pem in /etc/ssl/certs
	I1201 21:13:37.567613  527777 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts -> hosts in /etc/test/nested/copy/486002
	I1201 21:13:37.567670  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/486002
	I1201 21:13:37.575725  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:13:37.593481  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts --> /etc/test/nested/copy/486002/hosts (40 bytes)
	I1201 21:13:37.611620  527777 start.go:296] duration metric: took 179.151488ms for postStartSetup
	I1201 21:13:37.611718  527777 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 21:13:37.611798  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.629587  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.732362  527777 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 21:13:37.737388  527777 fix.go:56] duration metric: took 1.930412863s for fixHost
	I1201 21:13:37.737414  527777 start.go:83] releasing machines lock for "functional-198694", held for 1.930466515s
	I1201 21:13:37.737492  527777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:13:37.754641  527777 ssh_runner.go:195] Run: cat /version.json
	I1201 21:13:37.754685  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.754954  527777 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 21:13:37.755010  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.773486  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.787845  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.875124  527777 ssh_runner.go:195] Run: systemctl --version
	I1201 21:13:37.974016  527777 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 21:13:38.017000  527777 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 21:13:38.021875  527777 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 21:13:38.021957  527777 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 21:13:38.031594  527777 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 21:13:38.031622  527777 start.go:496] detecting cgroup driver to use...
	I1201 21:13:38.031660  527777 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1201 21:13:38.031747  527777 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 21:13:38.049187  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 21:13:38.064637  527777 docker.go:218] disabling cri-docker service (if available) ...
	I1201 21:13:38.064721  527777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 21:13:38.083239  527777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 21:13:38.097453  527777 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 21:13:38.249215  527777 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 21:13:38.371691  527777 docker.go:234] disabling docker service ...
	I1201 21:13:38.371769  527777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 21:13:38.388782  527777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 21:13:38.402306  527777 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 21:13:38.513914  527777 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 21:13:38.630153  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 21:13:38.644475  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 21:13:38.658966  527777 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 21:13:38.659023  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.668135  527777 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 21:13:38.668192  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.677509  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.686682  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.695781  527777 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 21:13:38.704147  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.713420  527777 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.722196  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.731481  527777 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 21:13:38.740144  527777 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 21:13:38.748176  527777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:13:38.858298  527777 ssh_runner.go:195] Run: sudo systemctl restart crio
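The `sh -c "sudo sed -i ..."` runs above rewrite `/etc/crio/crio.conf.d/02-crio.conf` in place before restarting crio. A minimal sketch of the same substitutions, replayed on a scratch copy so nothing real is touched (the two-line file here is a stand-in, not the full drop-in):

```shell
# Scratch copy standing in for /etc/crio/crio.conf.d/02-crio.conf
conf=$(mktemp)
printf '%s\n' 'pause_image = "registry.k8s.io/pause:3.9"' \
              'cgroup_manager = "systemd"' > "$conf"
# Same whole-line substitutions as the log: pin the pause image, switch the
# cgroup driver to cgroupfs (GNU sed -i assumed, as in the log).
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
result=$(cat "$conf")
rm -f "$conf"
```

The `^.*key = .*$` anchors replace the whole line regardless of its previous value, which is why the same command works whether or not the key was already customized.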
	I1201 21:13:39.035375  527777 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 21:13:39.035464  527777 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 21:13:39.039668  527777 start.go:564] Will wait 60s for crictl version
	I1201 21:13:39.039730  527777 ssh_runner.go:195] Run: which crictl
	I1201 21:13:39.043260  527777 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 21:13:39.078386  527777 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 21:13:39.078499  527777 ssh_runner.go:195] Run: crio --version
	I1201 21:13:39.110667  527777 ssh_runner.go:195] Run: crio --version
	I1201 21:13:39.146750  527777 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1201 21:13:39.149800  527777 cli_runner.go:164] Run: docker network inspect functional-198694 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 21:13:39.166717  527777 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1201 21:13:39.173972  527777 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1201 21:13:39.176755  527777 kubeadm.go:884] updating cluster {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 21:13:39.176898  527777 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:13:39.176968  527777 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 21:13:39.210945  527777 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 21:13:39.210958  527777 cache_images.go:86] Images are preloaded, skipping loading
	I1201 21:13:39.210965  527777 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1201 21:13:39.211070  527777 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-198694 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
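The kubelet unit dumped above uses the systemd drop-in override idiom: a bare `ExecStart=` line first empties the inherited command list, so the following `ExecStart=` fully replaces the base unit's command instead of adding a second one. A scratch reproduction of that shape (temp file, not the real `10-kubeadm.conf`):

```shell
# Write a drop-in with the reset-then-set ExecStart pair, as in the log.
drop=$(mktemp)
cat > "$drop" <<'EOF'
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --node-ip=192.168.49.2
EOF
resets=$(grep -c '^ExecStart=$' "$drop")   # the empty reset line
starts=$(grep -c '^ExecStart=' "$drop")    # reset + replacement
rm -f "$drop"
```

Without the empty reset line, systemd would reject a second `ExecStart=` for a non-oneshot service rather than override the first.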
	I1201 21:13:39.211187  527777 ssh_runner.go:195] Run: crio config
	I1201 21:13:39.284437  527777 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1201 21:13:39.284481  527777 cni.go:84] Creating CNI manager for ""
	I1201 21:13:39.284491  527777 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:13:39.284499  527777 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 21:13:39.284522  527777 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-198694 NodeName:functional-198694 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 21:13:39.284675  527777 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-198694"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 21:13:39.284759  527777 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 21:13:39.293198  527777 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 21:13:39.293275  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 21:13:39.301290  527777 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 21:13:39.315108  527777 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 21:13:39.329814  527777 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
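The `kubeadm.yaml.new` copied above is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined by `---` separators. A quick sanity-check sketch on a skeleton copy (kinds only, not the full config) confirms the shape:

```shell
# Skeleton stand-in for the generated multi-document kubeadm.yaml
f=$(mktemp)
cat > "$f" <<'EOF'
kind: InitConfiguration
---
kind: ClusterConfiguration
---
kind: KubeletConfiguration
---
kind: KubeProxyConfiguration
EOF
# N separators => N+1 documents
docs=$(($(grep -c '^---$' "$f") + 1))
rm -f "$f"
```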
	I1201 21:13:39.343669  527777 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1201 21:13:39.347900  527777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:13:39.461077  527777 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 21:13:39.654352  527777 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694 for IP: 192.168.49.2
	I1201 21:13:39.654364  527777 certs.go:195] generating shared ca certs ...
	I1201 21:13:39.654379  527777 certs.go:227] acquiring lock for ca certs: {Name:mk0475ccdbd6f854bab22fd8dfb32cc1af021336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:13:39.654515  527777 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key
	I1201 21:13:39.654555  527777 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key
	I1201 21:13:39.654570  527777 certs.go:257] generating profile certs ...
	I1201 21:13:39.654666  527777 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key
	I1201 21:13:39.654727  527777 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key.ab5f5a28
	I1201 21:13:39.654771  527777 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key
	I1201 21:13:39.654890  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem (1338 bytes)
	W1201 21:13:39.654921  527777 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002_empty.pem, impossibly tiny 0 bytes
	I1201 21:13:39.654928  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 21:13:39.654965  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem (1082 bytes)
	I1201 21:13:39.655015  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem (1123 bytes)
	I1201 21:13:39.655038  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem (1675 bytes)
	I1201 21:13:39.655084  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:13:39.655762  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 21:13:39.683427  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 21:13:39.704542  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 21:13:39.724282  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1201 21:13:39.744046  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 21:13:39.765204  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 21:13:39.784677  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 21:13:39.803885  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 21:13:39.822965  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /usr/share/ca-certificates/4860022.pem (1708 bytes)
	I1201 21:13:39.842026  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 21:13:39.860451  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem --> /usr/share/ca-certificates/486002.pem (1338 bytes)
	I1201 21:13:39.879380  527777 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 21:13:39.893847  527777 ssh_runner.go:195] Run: openssl version
	I1201 21:13:39.900456  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 21:13:39.910454  527777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:13:39.914599  527777 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:13:39.914672  527777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:13:39.957573  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 21:13:39.966576  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/486002.pem && ln -fs /usr/share/ca-certificates/486002.pem /etc/ssl/certs/486002.pem"
	I1201 21:13:39.976178  527777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/486002.pem
	I1201 21:13:39.980649  527777 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 21:13:39.980729  527777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/486002.pem
	I1201 21:13:40.025575  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/486002.pem /etc/ssl/certs/51391683.0"
	I1201 21:13:40.037195  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4860022.pem && ln -fs /usr/share/ca-certificates/4860022.pem /etc/ssl/certs/4860022.pem"
	I1201 21:13:40.047283  527777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4860022.pem
	I1201 21:13:40.051903  527777 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 21:13:40.051976  527777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4860022.pem
	I1201 21:13:40.094396  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4860022.pem /etc/ssl/certs/3ec20f2e.0"
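The `ln -fs ... /etc/ssl/certs/<hash>.0` steps above implement OpenSSL's subject-hash CA lookup: each CA PEM is linked under a name derived from `openssl x509 -hash` (e.g. `b5213941.0` for minikubeCA above). The same scheme, replayed in a temp dir with a throwaway self-signed cert (names here are illustrative):

```shell
dir=$(mktemp -d)
# Throwaway self-signed CA cert just to have something to hash
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj '/CN=demo-ca' -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
# Subject-name hash: 8 hex digits, used as the symlink's basename
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
linked=$( [ -L "$dir/$hash.0" ] && echo yes || echo no )
rm -rf "$dir"
```

The trailing `.0` is a collision counter: a second CA whose subject hashes to the same value would get `.1`, and so on.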
	I1201 21:13:40.103155  527777 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 21:13:40.107392  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 21:13:40.150081  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 21:13:40.192825  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 21:13:40.234772  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 21:13:40.276722  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 21:13:40.318487  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
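Each `openssl x509 ... -checkend 86400` run above asks whether that cert expires within the next 24 hours; exit status 0 means it is still valid past the window, so minikube can skip regenerating it. Demonstrated on a throwaway cert valid for 30 days:

```shell
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj '/CN=checkend-demo' -keyout "$dir/k.pem" -out "$dir/c.pem" 2>/dev/null
# Exit 0: cert survives the next 86400s (one day)
if openssl x509 -noout -in "$dir/c.pem" -checkend 86400 >/dev/null; then
  ok_24h=yes
else
  ok_24h=no
fi
# Exit 1: a 30-day cert does expire within the next 31536000s (one year)
if openssl x509 -noout -in "$dir/c.pem" -checkend 31536000 >/dev/null; then
  ok_1y=yes
else
  ok_1y=no
fi
rm -rf "$dir"
```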
	I1201 21:13:40.360912  527777 kubeadm.go:401] StartCluster: {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:13:40.361001  527777 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 21:13:40.361062  527777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 21:13:40.390972  527777 cri.go:89] found id: ""
	I1201 21:13:40.391046  527777 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 21:13:40.399343  527777 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 21:13:40.399354  527777 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 21:13:40.399410  527777 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 21:13:40.407260  527777 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.407785  527777 kubeconfig.go:125] found "functional-198694" server: "https://192.168.49.2:8441"
	I1201 21:13:40.409130  527777 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 21:13:40.418081  527777 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-01 20:59:03.175067800 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-01 21:13:39.337074315 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1201 21:13:40.418090  527777 kubeadm.go:1161] stopping kube-system containers ...
	I1201 21:13:40.418103  527777 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1201 21:13:40.418160  527777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 21:13:40.458573  527777 cri.go:89] found id: ""
	I1201 21:13:40.458639  527777 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1201 21:13:40.477506  527777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 21:13:40.486524  527777 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  1 21:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  1 21:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  1 21:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  1 21:03 /etc/kubernetes/scheduler.conf
	
	I1201 21:13:40.486611  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1201 21:13:40.494590  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1201 21:13:40.502887  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.502952  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 21:13:40.511354  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1201 21:13:40.519815  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.519872  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 21:13:40.528897  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1201 21:13:40.537744  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.537819  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 21:13:40.546165  527777 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 21:13:40.555103  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:40.603848  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:41.842196  527777 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.238322261s)
	I1201 21:13:41.842271  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:42.059194  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:42.130722  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:42.199813  527777 api_server.go:52] waiting for apiserver process to appear ...
	I1201 21:13:42.199901  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:42.700072  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:43.200731  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:43.700027  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:44.200776  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:44.700945  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:45.200498  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:45.700869  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:46.200358  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:46.700900  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:47.200833  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:47.700432  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:48.200342  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:48.700205  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:49.200031  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:49.700873  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:50.200171  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:50.700532  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:51.199969  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:51.700026  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:52.200123  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:52.700046  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:53.200038  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:53.700680  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:54.200039  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... same pgrep check repeated at ~500ms intervals, 96 attempts total between 21:13:54 and 21:14:41 ...]
	I1201 21:14:41.700037  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:42.200288  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:42.200384  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:42.231074  527777 cri.go:89] found id: ""
	I1201 21:14:42.231090  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.231099  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:42.231105  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:42.231205  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:42.260877  527777 cri.go:89] found id: ""
	I1201 21:14:42.260892  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.260900  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:42.260906  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:42.260972  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:42.290930  527777 cri.go:89] found id: ""
	I1201 21:14:42.290944  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.290953  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:42.290960  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:42.291034  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:42.323761  527777 cri.go:89] found id: ""
	I1201 21:14:42.323776  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.323784  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:42.323790  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:42.323870  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:42.356722  527777 cri.go:89] found id: ""
	I1201 21:14:42.356738  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.356748  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:42.356756  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:42.356820  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:42.387639  527777 cri.go:89] found id: ""
	I1201 21:14:42.387654  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.387661  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:42.387667  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:42.387738  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:42.433777  527777 cri.go:89] found id: ""
	I1201 21:14:42.433791  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.433798  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:42.433806  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:42.433815  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:42.520716  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:42.520743  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:42.536803  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:42.536820  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:42.605090  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:42.597365   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.598034   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.599719   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.600043   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.601473   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:42.597365   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.598034   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.599719   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.600043   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.601473   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
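The pattern in the log above — poll for the apiserver process every ~500ms, and only after the deadline passes fall back to gathering container and journal logs — amounts to a generic wait-until-true loop. The following is a minimal sketch of that cadence for illustration only; the function name, parameters, and defaults are hypothetical and are not minikube's actual implementation.

```python
import time

def wait_for(predicate, interval=0.5, timeout=48.0):
    """Poll `predicate` every `interval` seconds until it returns a truthy
    value or `timeout` elapses. Returns the truthy value on success, or
    None on timeout (the caller can then run fallback diagnostics, as the
    log gathering above does). Hypothetical helper, not minikube code."""
    deadline = time.monotonic() + timeout
    while True:
        result = predicate()
        if result:
            return result
        if time.monotonic() >= deadline:
            return None
        time.sleep(interval)
```

On timeout the log shows the fallback path: `crictl ps` per component, then `journalctl -u kubelet` / `-u crio`, then a `kubectl describe nodes` that fails because nothing is listening on localhost:8441.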
	I1201 21:14:42.605114  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:42.605125  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:42.679935  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:42.679957  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:45.213941  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:45.229905  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:45.229984  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:45.276158  527777 cri.go:89] found id: ""
	I1201 21:14:45.276174  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.276181  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:45.276187  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:45.276259  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:45.307844  527777 cri.go:89] found id: ""
	I1201 21:14:45.307859  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.307867  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:45.307872  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:45.307946  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:45.339831  527777 cri.go:89] found id: ""
	I1201 21:14:45.339845  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.339853  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:45.339858  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:45.339922  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:45.371617  527777 cri.go:89] found id: ""
	I1201 21:14:45.371632  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.371640  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:45.371646  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:45.371705  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:45.399984  527777 cri.go:89] found id: ""
	I1201 21:14:45.400005  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.400012  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:45.400017  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:45.400086  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:45.441742  527777 cri.go:89] found id: ""
	I1201 21:14:45.441755  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.441763  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:45.441769  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:45.441843  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:45.474201  527777 cri.go:89] found id: ""
	I1201 21:14:45.474216  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.474223  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:45.474231  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:45.474241  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:45.541899  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:45.541920  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:45.557525  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:45.557541  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:45.623123  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:45.614602   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.615281   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.616956   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.617711   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.619627   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:45.614602   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.615281   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.616956   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.617711   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.619627   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:45.623165  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:45.623176  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:45.703324  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:45.703344  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:48.232324  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:48.242709  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:48.242767  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:48.273768  527777 cri.go:89] found id: ""
	I1201 21:14:48.273782  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.273790  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:48.273795  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:48.273853  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:48.305133  527777 cri.go:89] found id: ""
	I1201 21:14:48.305147  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.305154  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:48.305159  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:48.305218  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:48.331706  527777 cri.go:89] found id: ""
	I1201 21:14:48.331720  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.331727  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:48.331733  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:48.331805  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:48.357401  527777 cri.go:89] found id: ""
	I1201 21:14:48.357414  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.357421  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:48.357426  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:48.357485  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:48.382601  527777 cri.go:89] found id: ""
	I1201 21:14:48.382615  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.382622  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:48.382627  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:48.382685  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:48.414103  527777 cri.go:89] found id: ""
	I1201 21:14:48.414117  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.414124  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:48.414130  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:48.414192  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:48.444275  527777 cri.go:89] found id: ""
	I1201 21:14:48.444289  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.444296  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:48.444304  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:48.444315  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:48.509613  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:48.500550   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.501177   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.502982   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.503577   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.505352   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:48.500550   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.501177   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.502982   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.503577   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.505352   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:48.509633  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:48.509645  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:48.583849  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:48.583868  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:48.611095  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:48.611113  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:48.678045  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:48.678067  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:51.193681  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:51.204158  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:51.204220  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:51.228546  527777 cri.go:89] found id: ""
	I1201 21:14:51.228560  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.228567  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:51.228573  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:51.228641  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:51.253363  527777 cri.go:89] found id: ""
	I1201 21:14:51.253377  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.253384  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:51.253389  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:51.253450  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:51.281388  527777 cri.go:89] found id: ""
	I1201 21:14:51.281403  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.281410  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:51.281415  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:51.281472  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:51.312321  527777 cri.go:89] found id: ""
	I1201 21:14:51.312334  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.312341  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:51.312347  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:51.312404  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:51.338071  527777 cri.go:89] found id: ""
	I1201 21:14:51.338084  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.338092  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:51.338097  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:51.338160  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:51.362911  527777 cri.go:89] found id: ""
	I1201 21:14:51.362925  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.362932  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:51.362938  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:51.362996  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:51.392560  527777 cri.go:89] found id: ""
	I1201 21:14:51.392575  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.392582  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:51.392589  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:51.392600  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:51.462446  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:51.462465  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:51.483328  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:51.483345  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:51.550537  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:51.542392   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.543042   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.544572   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.545190   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.546918   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:51.542392   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.543042   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.544572   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.545190   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.546918   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:51.550546  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:51.550556  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:51.627463  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:51.627484  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:54.160747  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:54.171038  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:54.171098  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:54.197306  527777 cri.go:89] found id: ""
	I1201 21:14:54.197320  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.197327  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:54.197333  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:54.197389  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:54.227205  527777 cri.go:89] found id: ""
	I1201 21:14:54.227219  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.227226  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:54.227232  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:54.227293  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:54.254126  527777 cri.go:89] found id: ""
	I1201 21:14:54.254141  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.254149  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:54.254156  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:54.254218  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:54.282152  527777 cri.go:89] found id: ""
	I1201 21:14:54.282166  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.282173  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:54.282178  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:54.282234  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:54.312220  527777 cri.go:89] found id: ""
	I1201 21:14:54.312234  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.312241  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:54.312246  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:54.312314  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:54.338233  527777 cri.go:89] found id: ""
	I1201 21:14:54.338247  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.338253  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:54.338259  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:54.338317  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:54.364068  527777 cri.go:89] found id: ""
	I1201 21:14:54.364082  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.364089  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:54.364097  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:54.364119  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:54.429655  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:54.429673  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:54.445696  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:54.445712  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:54.514079  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:54.504989   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.506549   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.507008   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.508528   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.508981   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:54.504989   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.506549   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.507008   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.508528   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.508981   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:54.514090  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:54.514100  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:54.590504  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:54.590526  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:57.119842  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:57.129802  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:57.129862  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:57.154250  527777 cri.go:89] found id: ""
	I1201 21:14:57.154263  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.154271  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:57.154276  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:57.154332  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:57.179738  527777 cri.go:89] found id: ""
	I1201 21:14:57.179761  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.179768  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:57.179775  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:57.179838  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:57.209881  527777 cri.go:89] found id: ""
	I1201 21:14:57.209895  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.209902  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:57.209907  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:57.209964  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:57.239761  527777 cri.go:89] found id: ""
	I1201 21:14:57.239775  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.239782  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:57.239787  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:57.239851  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:57.265438  527777 cri.go:89] found id: ""
	I1201 21:14:57.265457  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.265464  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:57.265470  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:57.265531  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:57.292117  527777 cri.go:89] found id: ""
	I1201 21:14:57.292131  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.292139  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:57.292145  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:57.292211  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:57.321507  527777 cri.go:89] found id: ""
	I1201 21:14:57.321526  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.321539  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:57.321547  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:57.321562  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:57.355489  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:57.355506  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:57.422253  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:57.422274  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:57.439866  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:57.439884  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:57.517974  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:57.510196   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.510601   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.512297   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.512646   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.514195   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:57.510196   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.510601   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.512297   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.512646   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.514195   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:57.517984  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:57.517997  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:00.095116  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:00.167383  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:00.167484  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:00.305857  527777 cri.go:89] found id: ""
	I1201 21:15:00.305874  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.305881  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:00.305888  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:00.305960  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:00.412948  527777 cri.go:89] found id: ""
	I1201 21:15:00.412964  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.412972  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:00.412979  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:00.413063  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:00.497486  527777 cri.go:89] found id: ""
	I1201 21:15:00.497503  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.497511  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:00.497517  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:00.497588  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:00.548544  527777 cri.go:89] found id: ""
	I1201 21:15:00.548558  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.548565  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:00.548571  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:00.548635  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:00.594658  527777 cri.go:89] found id: ""
	I1201 21:15:00.594674  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.594682  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:00.594688  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:00.594758  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:00.625642  527777 cri.go:89] found id: ""
	I1201 21:15:00.625658  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.625665  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:00.625672  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:00.625741  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:00.657944  527777 cri.go:89] found id: ""
	I1201 21:15:00.657968  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.657977  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:00.657987  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:00.657999  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:00.741394  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:00.730733   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.731901   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.732998   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.734744   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.736546   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:00.730733   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.731901   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.732998   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.734744   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.736546   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:00.741407  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:00.741425  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:00.821320  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:00.821344  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:00.857348  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:00.857380  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:00.927631  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:00.927652  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:03.446387  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:03.456673  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:03.456742  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:03.481752  527777 cri.go:89] found id: ""
	I1201 21:15:03.481766  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.481773  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:03.481779  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:03.481837  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:03.509959  527777 cri.go:89] found id: ""
	I1201 21:15:03.509974  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.509982  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:03.509987  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:03.510050  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:03.536645  527777 cri.go:89] found id: ""
	I1201 21:15:03.536659  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.536665  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:03.536671  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:03.536738  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:03.562917  527777 cri.go:89] found id: ""
	I1201 21:15:03.562932  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.562939  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:03.562945  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:03.563005  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:03.589891  527777 cri.go:89] found id: ""
	I1201 21:15:03.589905  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.589912  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:03.589918  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:03.589977  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:03.622362  527777 cri.go:89] found id: ""
	I1201 21:15:03.622376  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.622384  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:03.622390  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:03.622451  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:03.649882  527777 cri.go:89] found id: ""
	I1201 21:15:03.649897  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.649904  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:03.649912  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:03.649922  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:03.726812  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:03.726832  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:03.741643  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:03.741659  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:03.807830  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:03.800226   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.800973   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.802491   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.802813   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.804371   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:03.800226   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.800973   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.802491   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.802813   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.804371   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:03.807840  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:03.807851  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:03.882248  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:03.882268  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:06.412792  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:06.423457  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:06.423520  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:06.450416  527777 cri.go:89] found id: ""
	I1201 21:15:06.450434  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.450441  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:06.450461  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:06.450552  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:06.476229  527777 cri.go:89] found id: ""
	I1201 21:15:06.476243  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.476251  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:06.476257  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:06.476313  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:06.504311  527777 cri.go:89] found id: ""
	I1201 21:15:06.504326  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.504333  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:06.504339  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:06.504400  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:06.531500  527777 cri.go:89] found id: ""
	I1201 21:15:06.531515  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.531523  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:06.531529  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:06.531598  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:06.557205  527777 cri.go:89] found id: ""
	I1201 21:15:06.557219  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.557226  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:06.557231  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:06.557296  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:06.583224  527777 cri.go:89] found id: ""
	I1201 21:15:06.583237  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.583244  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:06.583250  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:06.583309  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:06.609560  527777 cri.go:89] found id: ""
	I1201 21:15:06.609574  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.609581  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:06.609589  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:06.609600  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:06.688119  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:06.688138  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:06.718171  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:06.718187  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:06.788360  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:06.788382  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:06.803516  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:06.803532  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:06.871576  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:06.863363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.864057   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.865787   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.866363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.867937   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:06.863363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.864057   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.865787   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.866363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.867937   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:09.373262  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:09.384129  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:09.384191  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:09.415353  527777 cri.go:89] found id: ""
	I1201 21:15:09.415369  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.415377  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:09.415384  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:09.415449  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:09.441666  527777 cri.go:89] found id: ""
	I1201 21:15:09.441681  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.441689  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:09.441707  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:09.441773  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:09.468735  527777 cri.go:89] found id: ""
	I1201 21:15:09.468749  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.468756  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:09.468761  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:09.468820  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:09.495871  527777 cri.go:89] found id: ""
	I1201 21:15:09.495885  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.495892  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:09.495898  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:09.495960  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:09.522124  527777 cri.go:89] found id: ""
	I1201 21:15:09.522138  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.522145  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:09.522151  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:09.522222  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:09.548540  527777 cri.go:89] found id: ""
	I1201 21:15:09.548554  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.548562  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:09.548568  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:09.548628  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:09.581799  527777 cri.go:89] found id: ""
	I1201 21:15:09.581814  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.581823  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:09.581831  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:09.581842  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:09.653172  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:09.653196  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:09.668649  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:09.668666  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:09.742062  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:09.733951   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.734515   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736072   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736575   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.738046   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:09.733951   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.734515   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736072   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736575   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.738046   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:09.742072  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:09.742085  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:09.817239  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:09.817259  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:12.348410  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:12.358969  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:12.359036  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:12.384762  527777 cri.go:89] found id: ""
	I1201 21:15:12.384776  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.384783  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:12.384788  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:12.384849  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:12.411423  527777 cri.go:89] found id: ""
	I1201 21:15:12.411437  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.411444  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:12.411449  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:12.411508  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:12.436624  527777 cri.go:89] found id: ""
	I1201 21:15:12.436638  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.436645  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:12.436650  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:12.436708  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:12.462632  527777 cri.go:89] found id: ""
	I1201 21:15:12.462647  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.462654  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:12.462661  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:12.462724  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:12.488511  527777 cri.go:89] found id: ""
	I1201 21:15:12.488526  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.488537  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:12.488542  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:12.488601  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:12.514421  527777 cri.go:89] found id: ""
	I1201 21:15:12.514434  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.514441  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:12.514448  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:12.514513  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:12.541557  527777 cri.go:89] found id: ""
	I1201 21:15:12.541571  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.541579  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:12.541587  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:12.541598  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:12.573231  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:12.573249  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:12.641686  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:12.641707  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:12.658713  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:12.658727  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:12.743144  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:12.734976   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.735722   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.737218   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.737705   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.739191   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:12.734976   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.735722   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.737218   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.737705   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.739191   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:12.743155  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:12.743166  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:15.318465  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:15.329023  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:15.329088  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:15.358063  527777 cri.go:89] found id: ""
	I1201 21:15:15.358077  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.358084  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:15.358090  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:15.358148  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:15.387949  527777 cri.go:89] found id: ""
	I1201 21:15:15.387963  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.387971  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:15.387976  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:15.388040  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:15.414396  527777 cri.go:89] found id: ""
	I1201 21:15:15.414412  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.414420  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:15.414425  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:15.414489  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:15.440368  527777 cri.go:89] found id: ""
	I1201 21:15:15.440383  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.440390  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:15.440396  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:15.440455  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:15.471515  527777 cri.go:89] found id: ""
	I1201 21:15:15.471529  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.471538  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:15.471544  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:15.471605  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:15.502736  527777 cri.go:89] found id: ""
	I1201 21:15:15.502750  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.502764  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:15.502770  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:15.502834  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:15.530525  527777 cri.go:89] found id: ""
	I1201 21:15:15.530540  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.530548  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:15.530555  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:15.530566  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:15.597211  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:15.588836   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.589648   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.591302   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.591840   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.593419   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:15.588836   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.589648   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.591302   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.591840   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.593419   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:15.597221  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:15.597232  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:15.673960  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:15.673983  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:15.708635  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:15.708651  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:15.779672  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:15.779693  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:18.296490  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:18.307184  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:18.307258  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:18.340992  527777 cri.go:89] found id: ""
	I1201 21:15:18.341006  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.341021  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:18.341027  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:18.341093  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:18.370602  527777 cri.go:89] found id: ""
	I1201 21:15:18.370626  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.370633  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:18.370642  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:18.370713  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:18.398425  527777 cri.go:89] found id: ""
	I1201 21:15:18.398440  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.398447  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:18.398453  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:18.398527  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:18.424514  527777 cri.go:89] found id: ""
	I1201 21:15:18.424530  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.424537  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:18.424561  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:18.424641  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:18.451718  527777 cri.go:89] found id: ""
	I1201 21:15:18.451732  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.451740  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:18.451746  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:18.451806  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:18.481779  527777 cri.go:89] found id: ""
	I1201 21:15:18.481804  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.481812  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:18.481818  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:18.481885  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:18.509744  527777 cri.go:89] found id: ""
	I1201 21:15:18.509760  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.509767  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:18.509775  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:18.509800  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:18.541318  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:18.541335  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:18.608586  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:18.608608  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:18.625859  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:18.625885  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:18.721362  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:18.711891   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.712647   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.714432   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.715256   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.717230   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:18.711891   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.712647   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.714432   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.715256   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.717230   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:18.721371  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:18.721383  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:21.298842  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:21.309420  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:21.309481  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:21.339650  527777 cri.go:89] found id: ""
	I1201 21:15:21.339664  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.339672  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:21.339678  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:21.339739  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:21.369828  527777 cri.go:89] found id: ""
	I1201 21:15:21.369843  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.369850  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:21.369857  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:21.369925  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:21.396833  527777 cri.go:89] found id: ""
	I1201 21:15:21.396860  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.396868  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:21.396874  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:21.396948  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:21.423340  527777 cri.go:89] found id: ""
	I1201 21:15:21.423354  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.423363  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:21.423369  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:21.423429  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:21.450028  527777 cri.go:89] found id: ""
	I1201 21:15:21.450041  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.450051  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:21.450057  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:21.450115  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:21.476290  527777 cri.go:89] found id: ""
	I1201 21:15:21.476305  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.476312  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:21.476317  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:21.476378  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:21.503570  527777 cri.go:89] found id: ""
	I1201 21:15:21.503591  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.503599  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:21.503607  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:21.503622  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:21.518970  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:21.518995  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:21.583522  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:21.575255   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.575783   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.577341   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.577753   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.579360   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:21.575255   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.575783   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.577341   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.577753   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.579360   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:21.583581  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:21.583592  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:21.662707  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:21.662730  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:21.693467  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:21.693484  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:24.268299  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:24.279383  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:24.279455  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:24.305720  527777 cri.go:89] found id: ""
	I1201 21:15:24.305733  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.305741  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:24.305746  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:24.305809  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:24.333862  527777 cri.go:89] found id: ""
	I1201 21:15:24.333878  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.333885  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:24.333891  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:24.333965  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:24.365916  527777 cri.go:89] found id: ""
	I1201 21:15:24.365931  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.365939  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:24.365948  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:24.366009  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:24.393185  527777 cri.go:89] found id: ""
	I1201 21:15:24.393202  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.393209  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:24.393216  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:24.393279  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:24.419532  527777 cri.go:89] found id: ""
	I1201 21:15:24.419547  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.419554  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:24.419560  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:24.419629  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:24.445529  527777 cri.go:89] found id: ""
	I1201 21:15:24.445543  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.445550  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:24.445557  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:24.445619  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:24.470988  527777 cri.go:89] found id: ""
	I1201 21:15:24.471002  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.471009  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:24.471017  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:24.471028  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:24.500416  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:24.500433  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:24.566009  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:24.566028  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:24.582350  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:24.582366  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:24.653085  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:24.643454   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.643885   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.645413   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.645743   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.647392   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:24.643454   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.643885   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.645413   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.645743   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.647392   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:24.653095  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:24.653106  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:27.239323  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:27.250432  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:27.250495  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:27.276796  527777 cri.go:89] found id: ""
	I1201 21:15:27.276824  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.276832  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:27.276837  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:27.276927  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:27.303592  527777 cri.go:89] found id: ""
	I1201 21:15:27.303607  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.303614  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:27.303620  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:27.303685  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:27.330141  527777 cri.go:89] found id: ""
	I1201 21:15:27.330155  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.330163  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:27.330168  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:27.330231  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:27.358477  527777 cri.go:89] found id: ""
	I1201 21:15:27.358491  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.358498  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:27.358503  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:27.358570  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:27.384519  527777 cri.go:89] found id: ""
	I1201 21:15:27.384533  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.384541  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:27.384547  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:27.384610  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:27.410788  527777 cri.go:89] found id: ""
	I1201 21:15:27.410804  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.410811  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:27.410817  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:27.410880  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:27.437727  527777 cri.go:89] found id: ""
	I1201 21:15:27.437742  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.437748  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:27.437756  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:27.437766  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:27.470359  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:27.470376  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:27.540219  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:27.540239  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:27.558165  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:27.558184  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:27.631990  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:27.624260   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.625006   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626587   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626906   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.628425   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:27.624260   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.625006   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626587   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626906   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.628425   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:27.632001  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:27.632013  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:30.214048  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:30.225906  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:30.225977  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:30.254528  527777 cri.go:89] found id: ""
	I1201 21:15:30.254544  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.254552  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:30.254559  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:30.254627  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:30.282356  527777 cri.go:89] found id: ""
	I1201 21:15:30.282371  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.282379  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:30.282385  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:30.282454  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:30.316244  527777 cri.go:89] found id: ""
	I1201 21:15:30.316266  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.316275  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:30.316281  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:30.316356  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:30.349310  527777 cri.go:89] found id: ""
	I1201 21:15:30.349324  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.349338  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:30.349345  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:30.349413  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:30.379233  527777 cri.go:89] found id: ""
	I1201 21:15:30.379259  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.379267  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:30.379273  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:30.379344  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:30.410578  527777 cri.go:89] found id: ""
	I1201 21:15:30.410592  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.410600  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:30.410607  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:30.410715  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:30.439343  527777 cri.go:89] found id: ""
	I1201 21:15:30.439357  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.439365  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:30.439373  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:30.439383  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:30.469722  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:30.469742  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:30.536977  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:30.536999  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:30.552719  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:30.552738  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:30.625200  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:30.616607   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.617292   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619213   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619905   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.621438   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:30.616607   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.617292   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619213   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619905   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.621438   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:30.625210  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:30.625221  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:33.202525  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:33.213081  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:33.213144  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:33.239684  527777 cri.go:89] found id: ""
	I1201 21:15:33.239699  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.239707  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:33.239713  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:33.239777  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:33.270046  527777 cri.go:89] found id: ""
	I1201 21:15:33.270060  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.270067  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:33.270073  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:33.270134  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:33.298615  527777 cri.go:89] found id: ""
	I1201 21:15:33.298631  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.298639  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:33.298646  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:33.298715  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:33.330389  527777 cri.go:89] found id: ""
	I1201 21:15:33.330403  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.330410  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:33.330416  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:33.330472  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:33.356054  527777 cri.go:89] found id: ""
	I1201 21:15:33.356068  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.356075  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:33.356081  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:33.356147  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:33.385771  527777 cri.go:89] found id: ""
	I1201 21:15:33.385784  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.385792  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:33.385797  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:33.385852  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:33.412562  527777 cri.go:89] found id: ""
	I1201 21:15:33.412580  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.412587  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:33.412601  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:33.412616  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:33.478848  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:33.478868  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:33.494280  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:33.494296  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:33.574855  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:33.566973   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.567796   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569492   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569806   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.571347   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:33.566973   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.567796   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569492   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569806   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.571347   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:33.574866  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:33.574876  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:33.653087  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:33.653110  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:36.198878  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:36.209291  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:36.209352  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:36.234666  527777 cri.go:89] found id: ""
	I1201 21:15:36.234679  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.234686  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:36.234691  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:36.234747  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:36.260740  527777 cri.go:89] found id: ""
	I1201 21:15:36.260754  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.260762  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:36.260767  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:36.260830  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:36.290674  527777 cri.go:89] found id: ""
	I1201 21:15:36.290688  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.290695  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:36.290700  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:36.290800  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:36.317381  527777 cri.go:89] found id: ""
	I1201 21:15:36.317396  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.317404  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:36.317410  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:36.317477  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:36.346371  527777 cri.go:89] found id: ""
	I1201 21:15:36.346384  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.346391  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:36.346396  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:36.346458  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:36.374545  527777 cri.go:89] found id: ""
	I1201 21:15:36.374559  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.374567  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:36.374573  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:36.374632  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:36.400298  527777 cri.go:89] found id: ""
	I1201 21:15:36.400324  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.400332  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:36.400339  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:36.400350  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:36.468826  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:36.468850  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:36.484335  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:36.484351  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:36.549841  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:36.541985   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.542492   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544187   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544616   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.546198   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:36.541985   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.542492   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544187   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544616   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.546198   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:36.549853  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:36.549864  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:36.630562  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:36.630587  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:39.169136  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:39.182222  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:39.182296  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:39.212188  527777 cri.go:89] found id: ""
	I1201 21:15:39.212202  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.212208  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:39.212213  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:39.212270  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:39.237215  527777 cri.go:89] found id: ""
	I1201 21:15:39.237229  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.237236  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:39.237241  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:39.237298  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:39.262205  527777 cri.go:89] found id: ""
	I1201 21:15:39.262219  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.262226  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:39.262232  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:39.262288  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:39.290471  527777 cri.go:89] found id: ""
	I1201 21:15:39.290485  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.290492  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:39.290498  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:39.290559  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:39.316212  527777 cri.go:89] found id: ""
	I1201 21:15:39.316238  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.316245  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:39.316251  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:39.316329  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:39.341014  527777 cri.go:89] found id: ""
	I1201 21:15:39.341037  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.341045  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:39.341051  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:39.341109  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:39.375032  527777 cri.go:89] found id: ""
	I1201 21:15:39.375058  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.375067  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:39.375083  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:39.375093  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:39.447422  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:39.447444  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:39.462737  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:39.462754  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:39.534298  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:39.526942   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.527544   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.528601   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.529043   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.530634   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:39.526942   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.527544   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.528601   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.529043   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.530634   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:39.534310  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:39.534320  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:39.611187  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:39.611208  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:42.146214  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:42.159004  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:42.159073  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:42.195922  527777 cri.go:89] found id: ""
	I1201 21:15:42.195938  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.195946  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:42.195952  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:42.196022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:42.230178  527777 cri.go:89] found id: ""
	I1201 21:15:42.230193  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.230200  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:42.230206  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:42.230271  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:42.261082  527777 cri.go:89] found id: ""
	I1201 21:15:42.261098  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.261105  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:42.261111  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:42.261188  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:42.295345  527777 cri.go:89] found id: ""
	I1201 21:15:42.295361  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.295377  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:42.295383  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:42.295457  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:42.330093  527777 cri.go:89] found id: ""
	I1201 21:15:42.330109  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.330116  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:42.330122  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:42.330186  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:42.358733  527777 cri.go:89] found id: ""
	I1201 21:15:42.358748  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.358756  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:42.358761  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:42.358823  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:42.388218  527777 cri.go:89] found id: ""
	I1201 21:15:42.388233  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.388240  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:42.388247  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:42.388258  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:42.469165  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:42.469185  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:42.500328  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:42.500345  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:42.569622  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:42.569642  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:42.585628  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:42.585645  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:42.654077  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:42.643924   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.644658   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.646844   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.647501   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.648880   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:42.643924   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.644658   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.646844   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.647501   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.648880   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:45.155990  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:45.177587  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:45.177664  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:45.216123  527777 cri.go:89] found id: ""
	I1201 21:15:45.216141  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.216149  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:45.216155  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:45.216241  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:45.257016  527777 cri.go:89] found id: ""
	I1201 21:15:45.257036  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.257044  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:45.257053  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:45.257139  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:45.310072  527777 cri.go:89] found id: ""
	I1201 21:15:45.310087  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.310095  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:45.310101  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:45.310165  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:45.339040  527777 cri.go:89] found id: ""
	I1201 21:15:45.339054  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.339062  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:45.339068  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:45.339154  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:45.370340  527777 cri.go:89] found id: ""
	I1201 21:15:45.370354  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.370361  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:45.370366  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:45.370426  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:45.396213  527777 cri.go:89] found id: ""
	I1201 21:15:45.396227  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.396234  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:45.396240  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:45.396299  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:45.423726  527777 cri.go:89] found id: ""
	I1201 21:15:45.423745  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.423755  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:45.423773  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:45.423784  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:45.490150  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:45.481612   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.482336   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.483955   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.484544   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.486132   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:45.481612   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.482336   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.483955   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.484544   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.486132   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:45.490161  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:45.490172  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:45.565908  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:45.565926  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:45.598740  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:45.598755  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:45.666263  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:45.666281  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:48.183348  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:48.193996  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:48.194068  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:48.221096  527777 cri.go:89] found id: ""
	I1201 21:15:48.221110  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.221117  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:48.221123  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:48.221180  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:48.247305  527777 cri.go:89] found id: ""
	I1201 21:15:48.247320  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.247328  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:48.247333  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:48.247392  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:48.277432  527777 cri.go:89] found id: ""
	I1201 21:15:48.277447  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.277453  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:48.277459  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:48.277521  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:48.304618  527777 cri.go:89] found id: ""
	I1201 21:15:48.304636  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.304643  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:48.304649  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:48.304712  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:48.331672  527777 cri.go:89] found id: ""
	I1201 21:15:48.331686  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.331694  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:48.331699  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:48.331757  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:48.360554  527777 cri.go:89] found id: ""
	I1201 21:15:48.360569  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.360577  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:48.360583  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:48.360640  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:48.385002  527777 cri.go:89] found id: ""
	I1201 21:15:48.385016  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.385023  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:48.385032  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:48.385043  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:48.414019  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:48.414036  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:48.479945  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:48.479964  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:48.495187  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:48.495206  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:48.560181  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:48.550756   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.551438   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.553149   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.554808   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.555445   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:48.550756   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.551438   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.553149   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.554808   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.555445   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:48.560191  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:48.560203  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:51.136751  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:51.147836  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:51.147914  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:51.178020  527777 cri.go:89] found id: ""
	I1201 21:15:51.178033  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.178041  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:51.178046  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:51.178106  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:51.206023  527777 cri.go:89] found id: ""
	I1201 21:15:51.206036  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.206044  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:51.206049  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:51.206150  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:51.236344  527777 cri.go:89] found id: ""
	I1201 21:15:51.236359  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.236366  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:51.236371  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:51.236434  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:51.262331  527777 cri.go:89] found id: ""
	I1201 21:15:51.262346  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.262353  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:51.262359  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:51.262419  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:51.290923  527777 cri.go:89] found id: ""
	I1201 21:15:51.290936  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.290944  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:51.290949  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:51.291016  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:51.318520  527777 cri.go:89] found id: ""
	I1201 21:15:51.318535  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.318542  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:51.318548  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:51.318607  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:51.345816  527777 cri.go:89] found id: ""
	I1201 21:15:51.345830  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.345837  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:51.345845  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:51.345857  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:51.361084  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:51.361100  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:51.427299  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:51.418365   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.419193   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.420874   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.421545   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.423332   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:51.418365   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.419193   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.420874   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.421545   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.423332   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:51.427309  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:51.427320  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:51.502906  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:51.502929  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:51.533675  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:51.533691  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:54.100640  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:54.111984  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:54.112047  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:54.137333  527777 cri.go:89] found id: ""
	I1201 21:15:54.137347  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.137353  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:54.137360  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:54.137419  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:54.166609  527777 cri.go:89] found id: ""
	I1201 21:15:54.166624  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.166635  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:54.166640  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:54.166705  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:54.193412  527777 cri.go:89] found id: ""
	I1201 21:15:54.193434  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.193441  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:54.193447  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:54.193509  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:54.219156  527777 cri.go:89] found id: ""
	I1201 21:15:54.219171  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.219178  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:54.219184  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:54.219241  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:54.248184  527777 cri.go:89] found id: ""
	I1201 21:15:54.248197  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.248204  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:54.248210  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:54.248278  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:54.274909  527777 cri.go:89] found id: ""
	I1201 21:15:54.274923  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.274931  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:54.274936  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:54.275003  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:54.300114  527777 cri.go:89] found id: ""
	I1201 21:15:54.300128  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.300135  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:54.300143  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:54.300154  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:54.366293  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:54.366312  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:54.382194  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:54.382210  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:54.446526  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:54.438379   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.439169   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.440693   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.441226   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.442826   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:54.438379   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.439169   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.440693   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.441226   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.442826   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:54.446536  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:54.446548  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:54.525097  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:54.525120  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:57.056605  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:57.067114  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:57.067185  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:57.096913  527777 cri.go:89] found id: ""
	I1201 21:15:57.096926  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.096933  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:57.096939  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:57.096995  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:57.124785  527777 cri.go:89] found id: ""
	I1201 21:15:57.124799  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.124806  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:57.124812  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:57.124877  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:57.151613  527777 cri.go:89] found id: ""
	I1201 21:15:57.151628  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.151635  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:57.151640  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:57.151702  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:57.181422  527777 cri.go:89] found id: ""
	I1201 21:15:57.181437  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.181445  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:57.181451  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:57.181510  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:57.207775  527777 cri.go:89] found id: ""
	I1201 21:15:57.207789  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.207796  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:57.207801  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:57.207861  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:57.232906  527777 cri.go:89] found id: ""
	I1201 21:15:57.232931  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.232939  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:57.232945  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:57.233016  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:57.259075  527777 cri.go:89] found id: ""
	I1201 21:15:57.259100  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.259107  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:57.259115  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:57.259126  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:57.288148  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:57.288164  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:57.355525  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:57.355545  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:57.371229  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:57.371246  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:57.439767  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:57.431231   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.431971   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.433692   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.434306   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.436090   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:57.431231   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.431971   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.433692   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.434306   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.436090   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:57.439779  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:57.439791  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:00.016574  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:00.063670  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:00.063743  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:00.181922  527777 cri.go:89] found id: ""
	I1201 21:16:00.181939  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.181947  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:00.181954  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:00.183169  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:00.318653  527777 cri.go:89] found id: ""
	I1201 21:16:00.318668  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.318676  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:00.318682  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:00.318752  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:00.366365  527777 cri.go:89] found id: ""
	I1201 21:16:00.366381  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.366391  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:00.366398  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:00.366497  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:00.432333  527777 cri.go:89] found id: ""
	I1201 21:16:00.432349  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.432358  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:00.432364  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:00.432436  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:00.487199  527777 cri.go:89] found id: ""
	I1201 21:16:00.487216  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.487238  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:00.487244  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:00.487315  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:00.541398  527777 cri.go:89] found id: ""
	I1201 21:16:00.541429  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.541438  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:00.541444  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:00.541530  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:00.577064  527777 cri.go:89] found id: ""
	I1201 21:16:00.577082  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.577095  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:00.577103  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:00.577116  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:00.646395  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:00.646418  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:00.667724  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:00.667741  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:00.750849  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:00.742119   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.743012   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.744823   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.745562   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.747124   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:00.742119   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.743012   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.744823   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.745562   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.747124   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:00.750860  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:00.750872  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:00.828858  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:00.828881  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:03.360481  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:03.371537  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:03.371611  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:03.401359  527777 cri.go:89] found id: ""
	I1201 21:16:03.401373  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.401380  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:03.401385  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:03.401452  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:03.428335  527777 cri.go:89] found id: ""
	I1201 21:16:03.428350  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.428358  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:03.428363  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:03.428424  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:03.460610  527777 cri.go:89] found id: ""
	I1201 21:16:03.460623  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.460630  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:03.460636  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:03.460695  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:03.489139  527777 cri.go:89] found id: ""
	I1201 21:16:03.489153  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.489161  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:03.489168  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:03.489234  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:03.519388  527777 cri.go:89] found id: ""
	I1201 21:16:03.519410  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.519418  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:03.519423  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:03.519490  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:03.549588  527777 cri.go:89] found id: ""
	I1201 21:16:03.549602  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.549610  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:03.549615  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:03.549678  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:03.576025  527777 cri.go:89] found id: ""
	I1201 21:16:03.576039  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.576047  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:03.576055  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:03.576066  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:03.605415  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:03.605431  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:03.675775  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:03.675797  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:03.691777  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:03.691793  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:03.765238  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:03.755858   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.756644   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.758434   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.759088   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.760930   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:03.755858   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.756644   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.758434   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.759088   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.760930   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:03.765250  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:03.765263  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:06.346338  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:06.356267  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:06.356325  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:06.380678  527777 cri.go:89] found id: ""
	I1201 21:16:06.380691  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.380717  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:06.380723  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:06.380780  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:06.410489  527777 cri.go:89] found id: ""
	I1201 21:16:06.410503  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.410518  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:06.410524  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:06.410588  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:06.443231  527777 cri.go:89] found id: ""
	I1201 21:16:06.443250  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.443257  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:06.443263  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:06.443334  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:06.468603  527777 cri.go:89] found id: ""
	I1201 21:16:06.468618  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.468625  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:06.468631  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:06.468700  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:06.493128  527777 cri.go:89] found id: ""
	I1201 21:16:06.493141  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.493148  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:06.493154  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:06.493212  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:06.518860  527777 cri.go:89] found id: ""
	I1201 21:16:06.518874  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.518881  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:06.518886  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:06.518958  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:06.545817  527777 cri.go:89] found id: ""
	I1201 21:16:06.545831  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.545839  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:06.545846  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:06.545857  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:06.610356  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:06.610378  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:06.625472  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:06.625488  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:06.722623  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:06.711338   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.712429   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.713404   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.714175   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.716915   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:06.711338   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.712429   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.713404   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.714175   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.716915   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:06.722633  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:06.722648  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:06.798208  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:06.798228  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:09.328391  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:09.339639  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:09.339706  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:09.368398  527777 cri.go:89] found id: ""
	I1201 21:16:09.368421  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.368428  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:09.368434  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:09.368512  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:09.398525  527777 cri.go:89] found id: ""
	I1201 21:16:09.398540  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.398548  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:09.398553  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:09.398615  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:09.426105  527777 cri.go:89] found id: ""
	I1201 21:16:09.426121  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.426129  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:09.426145  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:09.426205  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:09.456433  527777 cri.go:89] found id: ""
	I1201 21:16:09.456449  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.456456  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:09.456462  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:09.456525  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:09.488473  527777 cri.go:89] found id: ""
	I1201 21:16:09.488488  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.488495  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:09.488503  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:09.488563  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:09.514937  527777 cri.go:89] found id: ""
	I1201 21:16:09.514951  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.514958  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:09.514964  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:09.515027  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:09.545815  527777 cri.go:89] found id: ""
	I1201 21:16:09.545829  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.545837  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:09.545845  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:09.545857  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:09.575097  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:09.575115  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:09.642216  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:09.642237  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:09.663629  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:09.663645  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:09.745863  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:09.737300   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.737977   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.739598   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.740167   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.741918   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:09.737300   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.737977   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.739598   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.740167   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.741918   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:09.745876  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:09.745888  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:12.327853  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:12.338928  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:12.338992  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:12.372550  527777 cri.go:89] found id: ""
	I1201 21:16:12.372583  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.372591  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:12.372597  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:12.372662  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:12.402760  527777 cri.go:89] found id: ""
	I1201 21:16:12.402776  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.402784  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:12.402790  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:12.402851  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:12.429193  527777 cri.go:89] found id: ""
	I1201 21:16:12.429208  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.429215  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:12.429221  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:12.429286  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:12.456952  527777 cri.go:89] found id: ""
	I1201 21:16:12.456966  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.456973  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:12.456978  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:12.457037  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:12.483859  527777 cri.go:89] found id: ""
	I1201 21:16:12.483874  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.483881  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:12.483887  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:12.483950  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:12.510218  527777 cri.go:89] found id: ""
	I1201 21:16:12.510234  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.510242  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:12.510248  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:12.510323  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:12.536841  527777 cri.go:89] found id: ""
	I1201 21:16:12.536856  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.536864  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:12.536871  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:12.536881  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:12.612682  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:12.612702  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:12.641218  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:12.641235  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:12.719908  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:12.719930  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:12.736058  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:12.736077  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:12.803643  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:12.795056   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.795699   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.797375   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.798039   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.799685   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:12.795056   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.795699   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.797375   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.798039   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.799685   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:15.304417  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:15.314647  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:15.314707  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:15.342468  527777 cri.go:89] found id: ""
	I1201 21:16:15.342483  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.342491  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:15.342497  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:15.342559  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:15.369048  527777 cri.go:89] found id: ""
	I1201 21:16:15.369063  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.369071  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:15.369077  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:15.369140  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:15.393869  527777 cri.go:89] found id: ""
	I1201 21:16:15.393884  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.393891  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:15.393897  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:15.393960  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:15.420049  527777 cri.go:89] found id: ""
	I1201 21:16:15.420063  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.420071  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:15.420077  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:15.420136  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:15.450112  527777 cri.go:89] found id: ""
	I1201 21:16:15.450126  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.450134  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:15.450140  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:15.450201  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:15.475788  527777 cri.go:89] found id: ""
	I1201 21:16:15.475803  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.475811  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:15.475817  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:15.475884  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:15.502058  527777 cri.go:89] found id: ""
	I1201 21:16:15.502072  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.502084  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:15.502092  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:15.502102  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:15.535936  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:15.535953  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:15.601548  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:15.601568  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:15.617150  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:15.617167  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:15.694491  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:15.683261   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.684161   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.685978   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.686544   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.688226   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:15.683261   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.684161   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.685978   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.686544   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.688226   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:15.694502  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:15.694514  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:18.282089  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:18.292620  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:18.292687  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:18.320483  527777 cri.go:89] found id: ""
	I1201 21:16:18.320497  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.320504  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:18.320510  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:18.320569  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:18.346376  527777 cri.go:89] found id: ""
	I1201 21:16:18.346389  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.346397  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:18.346402  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:18.346459  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:18.377534  527777 cri.go:89] found id: ""
	I1201 21:16:18.377549  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.377557  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:18.377562  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:18.377619  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:18.402867  527777 cri.go:89] found id: ""
	I1201 21:16:18.402882  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.402892  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:18.402897  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:18.402952  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:18.429104  527777 cri.go:89] found id: ""
	I1201 21:16:18.429119  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.429126  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:18.429132  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:18.429193  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:18.455237  527777 cri.go:89] found id: ""
	I1201 21:16:18.455251  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.455257  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:18.455263  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:18.455330  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:18.480176  527777 cri.go:89] found id: ""
	I1201 21:16:18.480190  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.480197  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:18.480205  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:18.480215  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:18.554692  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:18.554713  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:18.586044  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:18.586062  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:18.654056  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:18.654076  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:18.670115  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:18.670131  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:18.739729  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:18.731971   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.732738   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.734274   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.734737   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.736253   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:18.731971   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.732738   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.734274   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.734737   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.736253   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:21.240925  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:21.251332  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:21.251400  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:21.277213  527777 cri.go:89] found id: ""
	I1201 21:16:21.277228  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.277266  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:21.277275  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:21.277349  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:21.304294  527777 cri.go:89] found id: ""
	I1201 21:16:21.304308  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.304316  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:21.304321  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:21.304393  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:21.331354  527777 cri.go:89] found id: ""
	I1201 21:16:21.331369  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.331377  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:21.331382  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:21.331455  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:21.358548  527777 cri.go:89] found id: ""
	I1201 21:16:21.358563  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.358571  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:21.358577  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:21.358637  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:21.384228  527777 cri.go:89] found id: ""
	I1201 21:16:21.384242  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.384250  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:21.384255  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:21.384321  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:21.413560  527777 cri.go:89] found id: ""
	I1201 21:16:21.413574  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.413581  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:21.413587  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:21.413647  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:21.439790  527777 cri.go:89] found id: ""
	I1201 21:16:21.439805  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.439813  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:21.439821  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:21.439839  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:21.505587  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:21.505607  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:21.522038  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:21.522064  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:21.590692  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:21.582084   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.583389   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.584091   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.585517   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.585879   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:21.582084   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.583389   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.584091   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.585517   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.585879   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:21.590718  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:21.590730  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:21.667703  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:21.667727  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:24.203209  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:24.214159  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:24.214230  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:24.242378  527777 cri.go:89] found id: ""
	I1201 21:16:24.242392  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.242399  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:24.242405  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:24.242486  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:24.269017  527777 cri.go:89] found id: ""
	I1201 21:16:24.269032  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.269039  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:24.269045  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:24.269103  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:24.295927  527777 cri.go:89] found id: ""
	I1201 21:16:24.295942  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.295949  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:24.295955  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:24.296019  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:24.321917  527777 cri.go:89] found id: ""
	I1201 21:16:24.321932  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.321939  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:24.321944  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:24.322012  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:24.350147  527777 cri.go:89] found id: ""
	I1201 21:16:24.350163  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.350171  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:24.350177  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:24.350250  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:24.376131  527777 cri.go:89] found id: ""
	I1201 21:16:24.376145  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.376153  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:24.376160  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:24.376220  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:24.403024  527777 cri.go:89] found id: ""
	I1201 21:16:24.403039  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.403046  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:24.403055  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:24.403068  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:24.418212  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:24.418230  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:24.486448  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:24.478347   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.478999   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.480897   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.481565   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.482855   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:24.478347   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.478999   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.480897   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.481565   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.482855   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:24.486460  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:24.486472  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:24.563285  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:24.563307  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:24.597003  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:24.597023  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:27.167466  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:27.179061  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:27.179139  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:27.210380  527777 cri.go:89] found id: ""
	I1201 21:16:27.210394  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.210402  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:27.210409  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:27.210474  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:27.238732  527777 cri.go:89] found id: ""
	I1201 21:16:27.238747  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.238754  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:27.238760  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:27.238827  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:27.265636  527777 cri.go:89] found id: ""
	I1201 21:16:27.265652  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.265661  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:27.265667  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:27.265736  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:27.292213  527777 cri.go:89] found id: ""
	I1201 21:16:27.292228  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.292235  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:27.292241  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:27.292300  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:27.324732  527777 cri.go:89] found id: ""
	I1201 21:16:27.324747  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.324755  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:27.324762  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:27.324827  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:27.352484  527777 cri.go:89] found id: ""
	I1201 21:16:27.352499  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.352507  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:27.352513  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:27.352590  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:27.384113  527777 cri.go:89] found id: ""
	I1201 21:16:27.384128  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.384136  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:27.384144  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:27.384155  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:27.415615  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:27.415634  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:27.482296  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:27.482319  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:27.498829  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:27.498846  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:27.569732  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:27.560441   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.561149   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.563083   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.564057   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.565939   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:27.560441   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.561149   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.563083   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.564057   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.565939   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:27.569744  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:27.569757  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:30.145371  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:30.156840  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:30.156922  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:30.184704  527777 cri.go:89] found id: ""
	I1201 21:16:30.184719  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.184727  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:30.184733  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:30.184795  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:30.213086  527777 cri.go:89] found id: ""
	I1201 21:16:30.213110  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.213120  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:30.213125  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:30.213192  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:30.245472  527777 cri.go:89] found id: ""
	I1201 21:16:30.245486  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.245494  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:30.245499  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:30.245565  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:30.273463  527777 cri.go:89] found id: ""
	I1201 21:16:30.273477  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.273485  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:30.273491  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:30.273557  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:30.302141  527777 cri.go:89] found id: ""
	I1201 21:16:30.302156  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.302164  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:30.302170  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:30.302232  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:30.329744  527777 cri.go:89] found id: ""
	I1201 21:16:30.329758  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.329765  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:30.329771  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:30.329833  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:30.356049  527777 cri.go:89] found id: ""
	I1201 21:16:30.356063  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.356071  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:30.356079  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:30.356110  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:30.424124  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:30.415484   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.416264   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.417932   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.418545   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.420321   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:30.415484   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.416264   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.417932   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.418545   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.420321   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:30.424134  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:30.424145  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:30.498989  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:30.499009  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:30.536189  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:30.536208  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:30.601111  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:30.601130  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:33.116248  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:33.129790  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:33.129876  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:33.162072  527777 cri.go:89] found id: ""
	I1201 21:16:33.162085  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.162093  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:33.162098  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:33.162168  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:33.188853  527777 cri.go:89] found id: ""
	I1201 21:16:33.188868  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.188875  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:33.188881  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:33.188944  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:33.215527  527777 cri.go:89] found id: ""
	I1201 21:16:33.215541  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.215548  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:33.215554  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:33.215613  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:33.241336  527777 cri.go:89] found id: ""
	I1201 21:16:33.241350  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.241357  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:33.241363  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:33.241422  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:33.267551  527777 cri.go:89] found id: ""
	I1201 21:16:33.267564  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.267571  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:33.267576  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:33.267639  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:33.293257  527777 cri.go:89] found id: ""
	I1201 21:16:33.293273  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.293280  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:33.293286  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:33.293346  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:33.324702  527777 cri.go:89] found id: ""
	I1201 21:16:33.324717  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.324725  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:33.324733  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:33.324745  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:33.393448  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:33.393473  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:33.409048  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:33.409075  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:33.473709  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:33.465395   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.465779   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.467541   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.468183   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.469632   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:33.465395   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.465779   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.467541   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.468183   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.469632   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:33.473720  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:33.473731  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:33.549174  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:33.549194  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:36.083124  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:36.093860  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:36.093919  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:36.122911  527777 cri.go:89] found id: ""
	I1201 21:16:36.122925  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.122932  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:36.122938  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:36.123000  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:36.148002  527777 cri.go:89] found id: ""
	I1201 21:16:36.148016  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.148023  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:36.148028  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:36.148088  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:36.173008  527777 cri.go:89] found id: ""
	I1201 21:16:36.173022  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.173029  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:36.173034  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:36.173092  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:36.198828  527777 cri.go:89] found id: ""
	I1201 21:16:36.198841  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.198848  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:36.198854  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:36.198909  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:36.224001  527777 cri.go:89] found id: ""
	I1201 21:16:36.224015  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.224022  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:36.224027  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:36.224085  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:36.249054  527777 cri.go:89] found id: ""
	I1201 21:16:36.249068  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.249075  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:36.249080  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:36.249140  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:36.273000  527777 cri.go:89] found id: ""
	I1201 21:16:36.273014  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.273021  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:36.273029  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:36.273039  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:36.337502  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:36.337521  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:36.353315  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:36.353331  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:36.424612  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:36.416389   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.416852   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.418267   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.419034   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.420807   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:36.416389   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.416852   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.418267   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.419034   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.420807   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:36.424623  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:36.424633  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:36.503070  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:36.503100  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:39.034568  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:39.045696  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:39.045760  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:39.071542  527777 cri.go:89] found id: ""
	I1201 21:16:39.071555  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.071563  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:39.071569  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:39.071630  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:39.102301  527777 cri.go:89] found id: ""
	I1201 21:16:39.102315  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.102322  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:39.102328  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:39.102384  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:39.129808  527777 cri.go:89] found id: ""
	I1201 21:16:39.129823  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.129830  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:39.129836  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:39.129895  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:39.155555  527777 cri.go:89] found id: ""
	I1201 21:16:39.155569  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.155576  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:39.155582  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:39.155650  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:39.186394  527777 cri.go:89] found id: ""
	I1201 21:16:39.186408  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.186415  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:39.186420  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:39.186485  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:39.213875  527777 cri.go:89] found id: ""
	I1201 21:16:39.213889  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.213896  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:39.213901  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:39.213957  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:39.243609  527777 cri.go:89] found id: ""
	I1201 21:16:39.243623  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.243631  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:39.243640  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:39.243652  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:39.307878  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:39.307897  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:39.322972  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:39.322989  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:39.391843  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:39.383574   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.384012   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.385493   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.385831   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.387179   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:39.383574   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.384012   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.385493   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.385831   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.387179   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:39.391853  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:39.391869  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:39.471894  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:39.471915  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:42.007008  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:42.029520  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:42.029588  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:42.057505  527777 cri.go:89] found id: ""
	I1201 21:16:42.057520  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.057528  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:42.057534  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:42.057598  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:42.097060  527777 cri.go:89] found id: ""
	I1201 21:16:42.097086  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.097094  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:42.097100  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:42.097191  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:42.136029  527777 cri.go:89] found id: ""
	I1201 21:16:42.136048  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.136058  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:42.136064  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:42.136155  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:42.183711  527777 cri.go:89] found id: ""
	I1201 21:16:42.183733  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.183743  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:42.183750  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:42.183825  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:42.219282  527777 cri.go:89] found id: ""
	I1201 21:16:42.219298  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.219320  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:42.219326  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:42.219393  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:42.248969  527777 cri.go:89] found id: ""
	I1201 21:16:42.248986  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.248994  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:42.249005  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:42.249079  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:42.283438  527777 cri.go:89] found id: ""
	I1201 21:16:42.283452  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.283459  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:42.283467  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:42.283479  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:42.355657  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:42.347226   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.347801   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.349475   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.349945   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.351044   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:42.347226   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.347801   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.349475   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.349945   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.351044   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:42.355675  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:42.355686  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:42.432138  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:42.432158  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:42.466460  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:42.466475  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:42.532633  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:42.532653  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:45.050487  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:45.077310  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:45.077404  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:45.125431  527777 cri.go:89] found id: ""
	I1201 21:16:45.125455  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.125463  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:45.125469  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:45.125541  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:45.159113  527777 cri.go:89] found id: ""
	I1201 21:16:45.159151  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.159161  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:45.159167  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:45.159238  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:45.205059  527777 cri.go:89] found id: ""
	I1201 21:16:45.205075  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.205084  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:45.205092  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:45.205213  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:45.256952  527777 cri.go:89] found id: ""
	I1201 21:16:45.257035  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.257044  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:45.257051  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:45.257244  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:45.299953  527777 cri.go:89] found id: ""
	I1201 21:16:45.299967  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.299975  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:45.299981  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:45.300047  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:45.334546  527777 cri.go:89] found id: ""
	I1201 21:16:45.334562  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.334570  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:45.334576  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:45.334641  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:45.366922  527777 cri.go:89] found id: ""
	I1201 21:16:45.366936  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.366944  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:45.366952  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:45.366973  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:45.384985  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:45.385003  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:45.455424  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:45.445999   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.446779   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.448616   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.449343   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.450996   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:45.445999   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.446779   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.448616   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.449343   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.450996   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:45.455434  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:45.455446  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:45.532668  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:45.532689  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:45.572075  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:45.572092  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:48.147493  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:48.158252  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:48.158331  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:48.185671  527777 cri.go:89] found id: ""
	I1201 21:16:48.185685  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.185692  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:48.185697  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:48.185766  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:48.211977  527777 cri.go:89] found id: ""
	I1201 21:16:48.211991  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.211998  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:48.212003  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:48.212059  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:48.238605  527777 cri.go:89] found id: ""
	I1201 21:16:48.238620  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.238627  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:48.238632  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:48.238691  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:48.272407  527777 cri.go:89] found id: ""
	I1201 21:16:48.272421  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.272428  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:48.272433  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:48.272491  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:48.300451  527777 cri.go:89] found id: ""
	I1201 21:16:48.300465  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.300472  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:48.300478  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:48.300543  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:48.326518  527777 cri.go:89] found id: ""
	I1201 21:16:48.326542  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.326550  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:48.326555  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:48.326629  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:48.353027  527777 cri.go:89] found id: ""
	I1201 21:16:48.353043  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.353050  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:48.353059  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:48.353070  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:48.418908  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:48.418928  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:48.435338  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:48.435358  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:48.502670  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:48.494115   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.494749   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.496453   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.497013   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.498610   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:48.494115   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.494749   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.496453   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.497013   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.498610   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:48.502708  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:48.502718  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:48.579198  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:48.579219  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:51.111632  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:51.122895  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:51.122970  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:51.149845  527777 cri.go:89] found id: ""
	I1201 21:16:51.149859  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.149867  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:51.149872  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:51.149937  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:51.182385  527777 cri.go:89] found id: ""
	I1201 21:16:51.182399  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.182406  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:51.182411  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:51.182473  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:51.207954  527777 cri.go:89] found id: ""
	I1201 21:16:51.207967  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.208015  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:51.208024  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:51.208080  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:51.233058  527777 cri.go:89] found id: ""
	I1201 21:16:51.233071  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.233077  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:51.233083  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:51.233146  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:51.259105  527777 cri.go:89] found id: ""
	I1201 21:16:51.259119  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.259127  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:51.259147  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:51.259205  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:51.284870  527777 cri.go:89] found id: ""
	I1201 21:16:51.284884  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.284891  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:51.284896  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:51.284953  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:51.312084  527777 cri.go:89] found id: ""
	I1201 21:16:51.312099  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.312106  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:51.312115  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:51.312126  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:51.342115  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:51.342134  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:51.408816  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:51.408836  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:51.425032  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:51.425054  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:51.494088  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:51.485702   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.486261   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.487911   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.488439   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.489973   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:51.485702   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.486261   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.487911   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.488439   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.489973   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:51.494097  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:51.494107  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:54.070393  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:54.082393  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:54.082464  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:54.112007  527777 cri.go:89] found id: ""
	I1201 21:16:54.112033  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.112041  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:54.112048  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:54.112120  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:54.142629  527777 cri.go:89] found id: ""
	I1201 21:16:54.142643  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.142650  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:54.142656  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:54.142715  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:54.170596  527777 cri.go:89] found id: ""
	I1201 21:16:54.170611  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.170618  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:54.170623  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:54.170685  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:54.199276  527777 cri.go:89] found id: ""
	I1201 21:16:54.199301  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.199309  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:54.199314  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:54.199385  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:54.229268  527777 cri.go:89] found id: ""
	I1201 21:16:54.229285  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.229294  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:54.229300  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:54.229378  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:54.261273  527777 cri.go:89] found id: ""
	I1201 21:16:54.261289  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.261298  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:54.261306  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:54.261409  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:54.289154  527777 cri.go:89] found id: ""
	I1201 21:16:54.289169  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.289189  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:54.289199  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:54.289216  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:54.363048  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:54.355149   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.356097   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.357711   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.358323   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.359471   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:54.355149   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.356097   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.357711   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.358323   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.359471   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:54.363059  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:54.363070  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:54.440875  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:54.440897  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:54.471338  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:54.471355  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:54.543810  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:54.543830  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:57.061388  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:57.071929  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:57.071998  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:57.102516  527777 cri.go:89] found id: ""
	I1201 21:16:57.102531  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.102540  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:57.102546  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:57.102614  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:57.129734  527777 cri.go:89] found id: ""
	I1201 21:16:57.129749  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.129756  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:57.129761  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:57.129825  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:57.160948  527777 cri.go:89] found id: ""
	I1201 21:16:57.160962  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.160971  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:57.160977  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:57.161049  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:57.192059  527777 cri.go:89] found id: ""
	I1201 21:16:57.192075  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.192082  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:57.192088  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:57.192155  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:57.217906  527777 cri.go:89] found id: ""
	I1201 21:16:57.217920  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.217927  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:57.217932  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:57.217992  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:57.246391  527777 cri.go:89] found id: ""
	I1201 21:16:57.246406  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.246414  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:57.246420  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:57.246480  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:57.273534  527777 cri.go:89] found id: ""
	I1201 21:16:57.273558  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.273565  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:57.273573  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:57.273585  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:57.338589  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:57.338609  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:57.354225  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:57.354241  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:57.425192  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:57.416917   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.417985   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419291   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419806   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.421427   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:57.416917   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.417985   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419291   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419806   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.421427   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:57.425202  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:57.425213  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:57.501690  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:57.501713  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:00.031846  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:00.071974  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:00.072071  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:00.158888  527777 cri.go:89] found id: ""
	I1201 21:17:00.158904  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.158912  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:00.158918  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:00.158994  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:00.267283  527777 cri.go:89] found id: ""
	I1201 21:17:00.267299  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.267306  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:00.267312  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:00.267395  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:00.331710  527777 cri.go:89] found id: ""
	I1201 21:17:00.331725  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.331733  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:00.331740  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:00.331821  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:00.416435  527777 cri.go:89] found id: ""
	I1201 21:17:00.416468  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.416476  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:00.416482  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:00.416566  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:00.456878  527777 cri.go:89] found id: ""
	I1201 21:17:00.456894  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.456904  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:00.456909  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:00.456979  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:00.511096  527777 cri.go:89] found id: ""
	I1201 21:17:00.511113  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.511122  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:00.511166  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:00.511245  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:00.565444  527777 cri.go:89] found id: ""
	I1201 21:17:00.565463  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.565471  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:00.565480  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:00.565498  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:00.641086  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:00.641121  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:00.662045  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:00.662064  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:00.750234  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:00.740709   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.741500   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.743472   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.744204   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.745911   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:00.740709   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.741500   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.743472   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.744204   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.745911   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:00.750246  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:00.750258  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:00.828511  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:00.828539  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:03.366405  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:03.379053  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:03.379127  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:03.412977  527777 cri.go:89] found id: ""
	I1201 21:17:03.412991  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.412999  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:03.413005  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:03.413074  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:03.442789  527777 cri.go:89] found id: ""
	I1201 21:17:03.442817  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.442827  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:03.442834  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:03.442956  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:03.472731  527777 cri.go:89] found id: ""
	I1201 21:17:03.472758  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.472767  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:03.472772  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:03.472843  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:03.503719  527777 cri.go:89] found id: ""
	I1201 21:17:03.503735  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.503744  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:03.503751  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:03.503823  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:03.533642  527777 cri.go:89] found id: ""
	I1201 21:17:03.533658  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.533665  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:03.533671  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:03.533749  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:03.562889  527777 cri.go:89] found id: ""
	I1201 21:17:03.562908  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.562916  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:03.562922  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:03.563006  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:03.592257  527777 cri.go:89] found id: ""
	I1201 21:17:03.592275  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.592283  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:03.592291  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:03.592303  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:03.660263  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:03.660282  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:03.683357  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:03.683375  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:03.765695  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:03.755989   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.757040   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758018   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758825   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.760781   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:03.755989   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.757040   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758018   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758825   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.760781   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:03.765707  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:03.765719  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:03.842543  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:03.842567  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:06.376185  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:06.387932  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:06.388000  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:06.417036  527777 cri.go:89] found id: ""
	I1201 21:17:06.417050  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.417058  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:06.417064  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:06.417125  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:06.447064  527777 cri.go:89] found id: ""
	I1201 21:17:06.447090  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.447098  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:06.447104  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:06.447207  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:06.476879  527777 cri.go:89] found id: ""
	I1201 21:17:06.476893  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.476900  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:06.476905  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:06.476968  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:06.506320  527777 cri.go:89] found id: ""
	I1201 21:17:06.506338  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.506346  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:06.506352  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:06.506419  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:06.535420  527777 cri.go:89] found id: ""
	I1201 21:17:06.535443  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.535451  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:06.535458  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:06.535525  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:06.563751  527777 cri.go:89] found id: ""
	I1201 21:17:06.563784  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.563792  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:06.563798  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:06.563865  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:06.597779  527777 cri.go:89] found id: ""
	I1201 21:17:06.597795  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.597803  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:06.597811  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:06.597823  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:06.681458  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:06.672535   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.673200   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.674869   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.675413   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.677204   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:06.672535   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.673200   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.674869   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.675413   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.677204   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:06.681470  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:06.681482  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:06.778343  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:06.778369  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:06.812835  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:06.812854  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:06.886097  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:06.886123  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:09.404611  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:09.415307  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:09.415386  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:09.454145  527777 cri.go:89] found id: ""
	I1201 21:17:09.454159  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.454168  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:09.454174  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:09.454240  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:09.483869  527777 cri.go:89] found id: ""
	I1201 21:17:09.483885  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.483893  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:09.483899  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:09.483961  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:09.510637  527777 cri.go:89] found id: ""
	I1201 21:17:09.510650  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.510657  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:09.510662  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:09.510719  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:09.542823  527777 cri.go:89] found id: ""
	I1201 21:17:09.542837  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.542844  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:09.542849  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:09.542911  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:09.570165  527777 cri.go:89] found id: ""
	I1201 21:17:09.570184  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.570191  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:09.570196  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:09.570254  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:09.595630  527777 cri.go:89] found id: ""
	I1201 21:17:09.595645  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.595652  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:09.595658  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:09.595722  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:09.621205  527777 cri.go:89] found id: ""
	I1201 21:17:09.621219  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.621226  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:09.621234  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:09.621244  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:09.700160  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:09.700182  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:09.739401  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:09.739425  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:09.809572  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:09.809594  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:09.828869  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:09.828886  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:09.920701  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:09.910986   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.911656   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.913525   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.914123   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.915691   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:09.910986   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.911656   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.913525   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.914123   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.915691   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:12.421012  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:12.432213  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:12.432287  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:12.459734  527777 cri.go:89] found id: ""
	I1201 21:17:12.459757  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.459765  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:12.459771  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:12.459840  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:12.485671  527777 cri.go:89] found id: ""
	I1201 21:17:12.485685  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.485692  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:12.485698  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:12.485757  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:12.511548  527777 cri.go:89] found id: ""
	I1201 21:17:12.511564  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.511572  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:12.511577  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:12.511637  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:12.542030  527777 cri.go:89] found id: ""
	I1201 21:17:12.542046  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.542053  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:12.542060  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:12.542120  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:12.567661  527777 cri.go:89] found id: ""
	I1201 21:17:12.567675  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.567691  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:12.567696  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:12.567766  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:12.597625  527777 cri.go:89] found id: ""
	I1201 21:17:12.597640  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.597647  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:12.597653  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:12.597718  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:12.623694  527777 cri.go:89] found id: ""
	I1201 21:17:12.623708  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.623715  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:12.623722  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:12.623733  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:12.638757  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:12.638772  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:12.731591  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:12.722231   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.723090   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.724750   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.725287   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.726853   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:12.722231   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.723090   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.724750   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.725287   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.726853   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:12.731601  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:12.731612  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:12.808720  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:12.808739  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:12.838448  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:12.838465  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:15.411670  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:15.422227  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:15.422288  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:15.449244  527777 cri.go:89] found id: ""
	I1201 21:17:15.449267  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.449275  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:15.449281  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:15.449351  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:15.475790  527777 cri.go:89] found id: ""
	I1201 21:17:15.475804  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.475812  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:15.475817  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:15.475883  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:15.505030  527777 cri.go:89] found id: ""
	I1201 21:17:15.505044  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.505052  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:15.505057  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:15.505121  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:15.535702  527777 cri.go:89] found id: ""
	I1201 21:17:15.535717  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.535726  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:15.535732  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:15.535802  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:15.561881  527777 cri.go:89] found id: ""
	I1201 21:17:15.561895  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.561903  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:15.561909  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:15.561968  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:15.589608  527777 cri.go:89] found id: ""
	I1201 21:17:15.589623  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.589631  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:15.589637  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:15.589704  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:15.617545  527777 cri.go:89] found id: ""
	I1201 21:17:15.617559  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.617565  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:15.617573  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:15.617584  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:15.633049  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:15.633067  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:15.719603  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:15.707520   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.708421   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710252   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710836   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.715756   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:15.707520   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.708421   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710252   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710836   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.715756   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:15.719617  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:15.719628  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:15.795783  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:15.795806  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:15.829611  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:15.829629  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:18.397343  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:18.407645  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:18.407707  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:18.431992  527777 cri.go:89] found id: ""
	I1201 21:17:18.432013  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.432020  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:18.432025  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:18.432082  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:18.456900  527777 cri.go:89] found id: ""
	I1201 21:17:18.456914  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.456921  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:18.456927  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:18.456985  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:18.482130  527777 cri.go:89] found id: ""
	I1201 21:17:18.482144  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.482151  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:18.482156  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:18.482216  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:18.506788  527777 cri.go:89] found id: ""
	I1201 21:17:18.506802  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.506809  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:18.506814  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:18.506880  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:18.535015  527777 cri.go:89] found id: ""
	I1201 21:17:18.535029  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.535036  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:18.535041  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:18.535102  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:18.561266  527777 cri.go:89] found id: ""
	I1201 21:17:18.561281  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.561288  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:18.561294  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:18.561350  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:18.590006  527777 cri.go:89] found id: ""
	I1201 21:17:18.590020  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.590027  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:18.590034  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:18.590044  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:18.655626  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:18.655644  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:18.673142  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:18.673158  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:18.755072  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:18.747127   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.747701   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749289   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749738   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.751418   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:18.747127   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.747701   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749289   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749738   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.751418   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:18.755084  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:18.755097  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:18.830997  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:18.831019  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:21.361828  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:21.372633  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:21.372693  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:21.397967  527777 cri.go:89] found id: ""
	I1201 21:17:21.397981  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.398009  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:21.398014  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:21.398083  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:21.424540  527777 cri.go:89] found id: ""
	I1201 21:17:21.424554  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.424570  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:21.424575  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:21.424644  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:21.450905  527777 cri.go:89] found id: ""
	I1201 21:17:21.450920  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.450948  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:21.450954  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:21.451029  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:21.483885  527777 cri.go:89] found id: ""
	I1201 21:17:21.483899  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.483906  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:21.483911  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:21.483966  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:21.514135  527777 cri.go:89] found id: ""
	I1201 21:17:21.514149  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.514156  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:21.514162  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:21.514221  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:21.540203  527777 cri.go:89] found id: ""
	I1201 21:17:21.540217  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.540224  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:21.540229  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:21.540285  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:21.570752  527777 cri.go:89] found id: ""
	I1201 21:17:21.570765  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.570772  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:21.570780  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:21.570794  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:21.636631  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:21.636651  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:21.652498  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:21.652516  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:21.739586  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:21.730607   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.731381   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733218   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733844   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.735529   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:21.730607   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.731381   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733218   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733844   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.735529   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:21.739597  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:21.739609  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:21.815773  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:21.815793  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:24.351500  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:24.361669  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:24.361728  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:24.390941  527777 cri.go:89] found id: ""
	I1201 21:17:24.390955  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.390962  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:24.390968  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:24.391024  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:24.416426  527777 cri.go:89] found id: ""
	I1201 21:17:24.416440  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.416448  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:24.416453  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:24.416510  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:24.443044  527777 cri.go:89] found id: ""
	I1201 21:17:24.443058  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.443065  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:24.443070  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:24.443182  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:24.468754  527777 cri.go:89] found id: ""
	I1201 21:17:24.468769  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.468776  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:24.468781  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:24.468840  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:24.494385  527777 cri.go:89] found id: ""
	I1201 21:17:24.494399  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.494406  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:24.494416  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:24.494477  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:24.519676  527777 cri.go:89] found id: ""
	I1201 21:17:24.519689  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.519696  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:24.519702  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:24.519761  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:24.546000  527777 cri.go:89] found id: ""
	I1201 21:17:24.546014  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.546021  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:24.546028  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:24.546041  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:24.611509  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:24.611529  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:24.626295  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:24.626324  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:24.702708  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:24.694946   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.695784   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697344   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697621   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.699100   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:24.694946   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.695784   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697344   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697621   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.699100   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:24.702719  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:24.702731  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:24.784492  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:24.784514  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:27.320817  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:27.331542  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:27.331602  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:27.357014  527777 cri.go:89] found id: ""
	I1201 21:17:27.357028  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.357035  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:27.357040  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:27.357098  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:27.381792  527777 cri.go:89] found id: ""
	I1201 21:17:27.381806  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.381813  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:27.381818  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:27.381880  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:27.407905  527777 cri.go:89] found id: ""
	I1201 21:17:27.407919  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.407927  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:27.407933  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:27.407994  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:27.433511  527777 cri.go:89] found id: ""
	I1201 21:17:27.433526  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.433533  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:27.433539  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:27.433596  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:27.459609  527777 cri.go:89] found id: ""
	I1201 21:17:27.459622  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.459629  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:27.459635  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:27.459700  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:27.487173  527777 cri.go:89] found id: ""
	I1201 21:17:27.487186  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.487193  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:27.487199  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:27.487257  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:27.512860  527777 cri.go:89] found id: ""
	I1201 21:17:27.512874  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.512881  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:27.512889  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:27.512901  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:27.541723  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:27.541739  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:27.606990  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:27.607009  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:27.622689  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:27.622705  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:27.700563  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:27.692859   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.693627   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695255   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695560   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.697023   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:27.692859   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.693627   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695255   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695560   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.697023   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:27.700573  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:27.700586  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:30.289250  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:30.300157  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:30.300217  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:30.327373  527777 cri.go:89] found id: ""
	I1201 21:17:30.327394  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.327405  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:30.327420  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:30.327492  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:30.353615  527777 cri.go:89] found id: ""
	I1201 21:17:30.353629  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.353636  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:30.353642  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:30.353702  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:30.385214  527777 cri.go:89] found id: ""
	I1201 21:17:30.385228  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.385235  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:30.385240  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:30.385300  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:30.415674  527777 cri.go:89] found id: ""
	I1201 21:17:30.415688  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.415695  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:30.415701  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:30.415767  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:30.442641  527777 cri.go:89] found id: ""
	I1201 21:17:30.442656  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.442663  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:30.442668  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:30.442726  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:30.469997  527777 cri.go:89] found id: ""
	I1201 21:17:30.470010  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.470017  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:30.470023  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:30.470081  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:30.495554  527777 cri.go:89] found id: ""
	I1201 21:17:30.495570  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.495579  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:30.495587  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:30.495599  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:30.559878  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:30.552159   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.552978   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554577   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554896   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.556427   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:30.552159   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.552978   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554577   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554896   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.556427   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:30.559888  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:30.559899  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:30.635560  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:30.635581  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:30.673666  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:30.673682  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:30.747787  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:30.747808  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:33.264623  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:33.276366  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:33.276427  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:33.306447  527777 cri.go:89] found id: ""
	I1201 21:17:33.306461  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.306473  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:33.306478  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:33.306538  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:33.334715  527777 cri.go:89] found id: ""
	I1201 21:17:33.334730  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.334738  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:33.334744  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:33.334814  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:33.365674  527777 cri.go:89] found id: ""
	I1201 21:17:33.365690  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.365698  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:33.365705  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:33.365774  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:33.396072  527777 cri.go:89] found id: ""
	I1201 21:17:33.396089  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.396096  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:33.396103  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:33.396175  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:33.429356  527777 cri.go:89] found id: ""
	I1201 21:17:33.429372  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.429381  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:33.429387  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:33.429461  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:33.457917  527777 cri.go:89] found id: ""
	I1201 21:17:33.457932  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.457941  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:33.457948  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:33.458022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:33.490167  527777 cri.go:89] found id: ""
	I1201 21:17:33.490182  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.490190  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:33.490199  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:33.490212  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:33.558131  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:33.558155  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:33.575080  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:33.575101  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:33.657808  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:33.644900   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.645597   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647206   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647744   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.649342   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:33.644900   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.645597   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647206   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647744   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.649342   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:33.657834  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:33.657848  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:33.754296  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:33.754323  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:36.289647  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:36.300774  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:36.300833  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:36.327492  527777 cri.go:89] found id: ""
	I1201 21:17:36.327507  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.327514  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:36.327520  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:36.327583  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:36.359515  527777 cri.go:89] found id: ""
	I1201 21:17:36.359529  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.359537  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:36.359542  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:36.359606  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:36.387977  527777 cri.go:89] found id: ""
	I1201 21:17:36.387990  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.387997  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:36.388002  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:36.388058  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:36.413410  527777 cri.go:89] found id: ""
	I1201 21:17:36.413429  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.413436  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:36.413442  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:36.413499  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:36.440588  527777 cri.go:89] found id: ""
	I1201 21:17:36.440614  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.440622  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:36.440627  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:36.440698  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:36.471404  527777 cri.go:89] found id: ""
	I1201 21:17:36.471419  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.471427  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:36.471433  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:36.471500  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:36.499502  527777 cri.go:89] found id: ""
	I1201 21:17:36.499518  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.499528  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:36.499536  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:36.499546  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:36.568027  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:36.568052  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:36.584561  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:36.584580  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:36.665718  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:36.648985   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.649527   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651261   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651644   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.653266   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:36.648985   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.649527   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651261   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651644   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.653266   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:36.665728  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:36.665740  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:36.748791  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:36.748812  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:39.285189  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:39.296369  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:39.296438  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:39.323280  527777 cri.go:89] found id: ""
	I1201 21:17:39.323294  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.323306  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:39.323312  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:39.323379  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:39.352092  527777 cri.go:89] found id: ""
	I1201 21:17:39.352107  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.352115  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:39.352120  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:39.352187  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:39.379352  527777 cri.go:89] found id: ""
	I1201 21:17:39.379367  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.379375  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:39.379382  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:39.379446  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:39.406925  527777 cri.go:89] found id: ""
	I1201 21:17:39.406940  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.406947  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:39.406954  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:39.407022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:39.434427  527777 cri.go:89] found id: ""
	I1201 21:17:39.434442  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.434450  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:39.434455  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:39.434521  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:39.466725  527777 cri.go:89] found id: ""
	I1201 21:17:39.466741  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.466748  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:39.466755  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:39.466821  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:39.494952  527777 cri.go:89] found id: ""
	I1201 21:17:39.494968  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.494976  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:39.494985  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:39.494998  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:39.510984  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:39.511002  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:39.585968  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:39.576561   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.577151   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.578340   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.579982   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.580410   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:39.576561   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.577151   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.578340   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.579982   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.580410   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:39.585981  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:39.585993  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:39.669009  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:39.669033  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:39.705170  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:39.705189  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:42.275450  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:42.287572  527777 kubeadm.go:602] duration metric: took 4m1.888207918s to restartPrimaryControlPlane
	W1201 21:17:42.287658  527777 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1201 21:17:42.287747  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1201 21:17:42.711674  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 21:17:42.725511  527777 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 21:17:42.734239  527777 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 21:17:42.734308  527777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 21:17:42.743050  527777 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 21:17:42.743060  527777 kubeadm.go:158] found existing configuration files:
	
	I1201 21:17:42.743120  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1201 21:17:42.751678  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 21:17:42.751731  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 21:17:42.759481  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1201 21:17:42.767903  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 21:17:42.767964  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 21:17:42.776067  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1201 21:17:42.784283  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 21:17:42.784355  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 21:17:42.792582  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1201 21:17:42.801449  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 21:17:42.801518  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 21:17:42.809783  527777 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 21:17:42.849635  527777 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 21:17:42.849689  527777 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 21:17:42.929073  527777 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 21:17:42.929165  527777 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1201 21:17:42.929199  527777 kubeadm.go:319] OS: Linux
	I1201 21:17:42.929243  527777 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 21:17:42.929296  527777 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1201 21:17:42.929342  527777 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 21:17:42.929388  527777 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 21:17:42.929435  527777 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 21:17:42.929482  527777 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 21:17:42.929526  527777 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 21:17:42.929573  527777 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 21:17:42.929617  527777 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1201 21:17:43.002025  527777 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 21:17:43.002165  527777 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 21:17:43.002258  527777 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 21:17:43.013458  527777 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 21:17:43.017000  527777 out.go:252]   - Generating certificates and keys ...
	I1201 21:17:43.017095  527777 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 21:17:43.017170  527777 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 21:17:43.017252  527777 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1201 21:17:43.017311  527777 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1201 21:17:43.017379  527777 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1201 21:17:43.017434  527777 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1201 21:17:43.017501  527777 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1201 21:17:43.017561  527777 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1201 21:17:43.017634  527777 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1201 21:17:43.017705  527777 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1201 21:17:43.017832  527777 kubeadm.go:319] [certs] Using the existing "sa" key
	I1201 21:17:43.017892  527777 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 21:17:43.133992  527777 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 21:17:43.467350  527777 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 21:17:43.613021  527777 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 21:17:43.910424  527777 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 21:17:44.196121  527777 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 21:17:44.196632  527777 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 21:17:44.199145  527777 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 21:17:44.202480  527777 out.go:252]   - Booting up control plane ...
	I1201 21:17:44.202575  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 21:17:44.202651  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 21:17:44.202718  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 21:17:44.217388  527777 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 21:17:44.217714  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 21:17:44.228031  527777 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 21:17:44.228400  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 21:17:44.228517  527777 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 21:17:44.357408  527777 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 21:17:44.357522  527777 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 21:21:44.357404  527777 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000240491s
	I1201 21:21:44.357429  527777 kubeadm.go:319] 
	I1201 21:21:44.357487  527777 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1201 21:21:44.357523  527777 kubeadm.go:319] 	- The kubelet is not running
	I1201 21:21:44.357633  527777 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1201 21:21:44.357637  527777 kubeadm.go:319] 
	I1201 21:21:44.357830  527777 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1201 21:21:44.357863  527777 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1201 21:21:44.357893  527777 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1201 21:21:44.357896  527777 kubeadm.go:319] 
	I1201 21:21:44.361511  527777 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1201 21:21:44.361943  527777 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1201 21:21:44.362051  527777 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 21:21:44.362287  527777 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1201 21:21:44.362292  527777 kubeadm.go:319] 
	I1201 21:21:44.362361  527777 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1201 21:21:44.362491  527777 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000240491s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1201 21:21:44.362579  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1201 21:21:44.772977  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 21:21:44.786214  527777 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 21:21:44.786270  527777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 21:21:44.794556  527777 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 21:21:44.794568  527777 kubeadm.go:158] found existing configuration files:
	
	I1201 21:21:44.794622  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1201 21:21:44.803048  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 21:21:44.803106  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 21:21:44.810695  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1201 21:21:44.818882  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 21:21:44.818947  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 21:21:44.827077  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1201 21:21:44.834936  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 21:21:44.834995  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 21:21:44.843074  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1201 21:21:44.851084  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 21:21:44.851166  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 21:21:44.858721  527777 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 21:21:44.981319  527777 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1201 21:21:44.981788  527777 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1201 21:21:45.157392  527777 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 21:25:46.243317  527777 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1201 21:25:46.243344  527777 kubeadm.go:319] 
	I1201 21:25:46.243413  527777 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1201 21:25:46.246817  527777 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 21:25:46.246871  527777 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 21:25:46.246962  527777 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 21:25:46.247022  527777 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1201 21:25:46.247057  527777 kubeadm.go:319] OS: Linux
	I1201 21:25:46.247100  527777 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 21:25:46.247175  527777 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1201 21:25:46.247246  527777 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 21:25:46.247312  527777 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 21:25:46.247369  527777 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 21:25:46.247421  527777 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 21:25:46.247464  527777 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 21:25:46.247511  527777 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 21:25:46.247555  527777 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1201 21:25:46.247626  527777 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 21:25:46.247719  527777 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 21:25:46.247811  527777 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 21:25:46.247872  527777 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 21:25:46.250950  527777 out.go:252]   - Generating certificates and keys ...
	I1201 21:25:46.251041  527777 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 21:25:46.251105  527777 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 21:25:46.251224  527777 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1201 21:25:46.251290  527777 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1201 21:25:46.251369  527777 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1201 21:25:46.251431  527777 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1201 21:25:46.251495  527777 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1201 21:25:46.251555  527777 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1201 21:25:46.251629  527777 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1201 21:25:46.251704  527777 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1201 21:25:46.251741  527777 kubeadm.go:319] [certs] Using the existing "sa" key
	I1201 21:25:46.251795  527777 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 21:25:46.251845  527777 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 21:25:46.251899  527777 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 21:25:46.251951  527777 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 21:25:46.252012  527777 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 21:25:46.252065  527777 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 21:25:46.252149  527777 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 21:25:46.252213  527777 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 21:25:46.255065  527777 out.go:252]   - Booting up control plane ...
	I1201 21:25:46.255213  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 21:25:46.255292  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 21:25:46.255359  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 21:25:46.255466  527777 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 21:25:46.255590  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 21:25:46.255713  527777 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 21:25:46.255816  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 21:25:46.255856  527777 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 21:25:46.256011  527777 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 21:25:46.256134  527777 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 21:25:46.256200  527777 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000272278s
	I1201 21:25:46.256203  527777 kubeadm.go:319] 
	I1201 21:25:46.256259  527777 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1201 21:25:46.256290  527777 kubeadm.go:319] 	- The kubelet is not running
	I1201 21:25:46.256400  527777 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1201 21:25:46.256404  527777 kubeadm.go:319] 
	I1201 21:25:46.256508  527777 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1201 21:25:46.256540  527777 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1201 21:25:46.256569  527777 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1201 21:25:46.256592  527777 kubeadm.go:319] 
	I1201 21:25:46.256631  527777 kubeadm.go:403] duration metric: took 12m5.895739008s to StartCluster
	I1201 21:25:46.256661  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:25:46.256721  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:25:46.286008  527777 cri.go:89] found id: ""
	I1201 21:25:46.286022  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.286029  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:25:46.286034  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:25:46.286096  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:25:46.311936  527777 cri.go:89] found id: ""
	I1201 21:25:46.311950  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.311957  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:25:46.311963  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:25:46.312022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:25:46.338008  527777 cri.go:89] found id: ""
	I1201 21:25:46.338022  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.338029  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:25:46.338035  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:25:46.338094  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:25:46.364430  527777 cri.go:89] found id: ""
	I1201 21:25:46.364446  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.364453  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:25:46.364459  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:25:46.364519  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:25:46.390553  527777 cri.go:89] found id: ""
	I1201 21:25:46.390568  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.390574  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:25:46.390580  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:25:46.390638  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:25:46.416135  527777 cri.go:89] found id: ""
	I1201 21:25:46.416149  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.416156  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:25:46.416161  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:25:46.416215  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:25:46.441110  527777 cri.go:89] found id: ""
	I1201 21:25:46.441124  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.441131  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:25:46.441139  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:25:46.441160  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:25:46.456311  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:25:46.456328  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:25:46.535568  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:25:46.527894   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.528437   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.529932   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.530345   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.531878   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:25:46.527894   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.528437   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.529932   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.530345   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.531878   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:25:46.535579  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:25:46.535591  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:25:46.613336  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:25:46.613357  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:25:46.643384  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:25:46.643410  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1201 21:25:46.714793  527777 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000272278s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1201 21:25:46.714844  527777 out.go:285] * 
	W1201 21:25:46.714913  527777 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000272278s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1201 21:25:46.714940  527777 out.go:285] * 
	W1201 21:25:46.717121  527777 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 21:25:46.722121  527777 out.go:203] 
	W1201 21:25:46.725981  527777 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000272278s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1201 21:25:46.726037  527777 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1201 21:25:46.726060  527777 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1201 21:25:46.729457  527777 out.go:203] 
	
	
	==> CRI-O <==
	Dec 01 21:25:55 functional-198694 crio[10476]: time="2025-12-01T21:25:55.958248239Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-198694 found" id=60b0690d-119a-4b74-971b-527f5644551b name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:56 functional-198694 crio[10476]: time="2025-12-01T21:25:56.002277819Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-198694" id=9d0f6f34-dff8-41d4-bb31-fb357f4c68af name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:56 functional-198694 crio[10476]: time="2025-12-01T21:25:56.002458196Z" level=info msg="Image localhost/kicbase/echo-server:functional-198694 not found" id=9d0f6f34-dff8-41d4-bb31-fb357f4c68af name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:56 functional-198694 crio[10476]: time="2025-12-01T21:25:56.002508862Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-198694 found" id=9d0f6f34-dff8-41d4-bb31-fb357f4c68af name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.001788017Z" level=info msg="Checking image status: kicbase/echo-server:functional-198694" id=1eef1f83-f41d-4072-8efe-21875777fc46 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.038212447Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-198694" id=f5d9dd7b-f748-4c02-b084-bf73e2e48bd8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.038444368Z" level=info msg="Image docker.io/kicbase/echo-server:functional-198694 not found" id=f5d9dd7b-f748-4c02-b084-bf73e2e48bd8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.038497101Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-198694 found" id=f5d9dd7b-f748-4c02-b084-bf73e2e48bd8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.071982198Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-198694" id=1a2cb465-f3f7-4483-9653-cf24325561b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.072139634Z" level=info msg="Image localhost/kicbase/echo-server:functional-198694 not found" id=1a2cb465-f3f7-4483-9653-cf24325561b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.072179354Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-198694 found" id=1a2cb465-f3f7-4483-9653-cf24325561b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.449575902Z" level=info msg="Checking image status: kicbase/echo-server:functional-198694" id=9c839e7f-fb5f-4968-a4bc-4d98c332783b name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.503799057Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-198694" id=1fbb90ce-8cb8-4806-9a58-81e0c582e6a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.504091628Z" level=info msg="Image docker.io/kicbase/echo-server:functional-198694 not found" id=1fbb90ce-8cb8-4806-9a58-81e0c582e6a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.504215777Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-198694 found" id=1fbb90ce-8cb8-4806-9a58-81e0c582e6a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.534112776Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-198694" id=7580a9c0-62f6-4f3e-8517-39fe2d123618 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.534244342Z" level=info msg="Image localhost/kicbase/echo-server:functional-198694 not found" id=7580a9c0-62f6-4f3e-8517-39fe2d123618 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.534283193Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-198694 found" id=7580a9c0-62f6-4f3e-8517-39fe2d123618 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.532335101Z" level=info msg="Checking image status: kicbase/echo-server:functional-198694" id=cd62b92b-638f-4a1a-ae2d-ff287a877bee name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.56735868Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-198694" id=c7e91211-bb1c-46f2-bcd8-a92da093ad25 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.56751706Z" level=info msg="Image docker.io/kicbase/echo-server:functional-198694 not found" id=c7e91211-bb1c-46f2-bcd8-a92da093ad25 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.56755705Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-198694 found" id=c7e91211-bb1c-46f2-bcd8-a92da093ad25 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.61237743Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-198694" id=f983679c-8adf-4453-8d30-a89c874062b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.612552228Z" level=info msg="Image localhost/kicbase/echo-server:functional-198694 not found" id=f983679c-8adf-4453-8d30-a89c874062b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.612623471Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-198694 found" id=f983679c-8adf-4453-8d30-a89c874062b7 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:27:49.836868   23921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:27:49.837508   23921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:27:49.839011   23921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:27:49.839435   23921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:27:49.840961   23921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 1 19:31] hrtimer: interrupt took 3224715 ns
	[Dec 1 20:00] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:16] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 1 20:22] systemd-journald[231]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 1 20:37] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:38] overlayfs: idmapped layers are currently not supported
	[  +0.076902] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 1 20:44] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:45] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:58] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:27:49 up  3:10,  0 user,  load average: 0.79, 0.45, 0.47
	Linux functional-198694 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 01 21:27:47 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:27:47 functional-198694 kubelet[23765]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:27:47 functional-198694 kubelet[23765]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:27:47 functional-198694 kubelet[23765]: E1201 21:27:47.717024   23765 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:27:47 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:27:47 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:27:48 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 803.
	Dec 01 21:27:48 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:27:48 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:27:48 functional-198694 kubelet[23801]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:27:48 functional-198694 kubelet[23801]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:27:48 functional-198694 kubelet[23801]: E1201 21:27:48.471249   23801 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:27:48 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:27:48 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:27:49 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 804.
	Dec 01 21:27:49 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:27:49 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:27:49 functional-198694 kubelet[23836]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:27:49 functional-198694 kubelet[23836]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:27:49 functional-198694 kubelet[23836]: E1201 21:27:49.221841   23836 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:27:49 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:27:49 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:27:49 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 805.
	Dec 01 21:27:49 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:27:49 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694: exit status 2 (374.917548ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-198694" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-198694 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-198694 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (79.226809ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-198694 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-198694 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-198694 describe po hello-node-connect: exit status 1 (69.063779ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-198694 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-198694 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-198694 logs -l app=hello-node-connect: exit status 1 (73.399954ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-198694 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-198694 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-198694 describe svc hello-node-connect: exit status 1 (67.665634ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-198694 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-198694
helpers_test.go:243: (dbg) docker inspect functional-198694:

-- stdout --
	[
	    {
	        "Id": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	        "Created": "2025-12-01T20:58:43.365574809Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515902,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:58:43.423541772Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hostname",
	        "HostsPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hosts",
	        "LogPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8-json.log",
	        "Name": "/functional-198694",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-198694:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-198694",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	                "LowerDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26-init/diff:/var/lib/docker/overlay2/f0ba49b44048d740697b37803f992c2f7a99e21ce77995ff128ceffc01329aa1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-198694",
	                "Source": "/var/lib/docker/volumes/functional-198694/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-198694",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-198694",
	                "name.minikube.sigs.k8s.io": "functional-198694",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8cb3cb57c35171bfce361b9e0de9c9f36ef89baf5e4ad0dd73159d10f1056820",
	            "SandboxKey": "/var/run/docker/netns/8cb3cb57c351",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-198694": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:9a:72:4c:a4:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9750c903db8645b2871ee2eb6fd897b77e607b9a995005513c7bcf81da63c819",
	                    "EndpointID": "884d9ec9fdfc44c10ccd4516f4ea05a765fb3ccb2118db0e8af2392e8613c402",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-198694",
	                        "e545295bd958"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694: exit status 2 (301.40922ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-198694 ssh sudo cat /etc/ssl/certs/486002.pem                                                                                                  │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ image   │ functional-198694 image ls                                                                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ ssh     │ functional-198694 ssh sudo cat /usr/share/ca-certificates/486002.pem                                                                                      │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ image   │ functional-198694 image save kicbase/echo-server:functional-198694 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ ssh     │ functional-198694 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ image   │ functional-198694 image rm kicbase/echo-server:functional-198694 --alsologtostderr                                                                        │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ ssh     │ functional-198694 ssh sudo cat /etc/ssl/certs/4860022.pem                                                                                                 │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ image   │ functional-198694 image ls                                                                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ ssh     │ functional-198694 ssh sudo cat /usr/share/ca-certificates/4860022.pem                                                                                     │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:26 UTC │
	│ image   │ functional-198694 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:26 UTC │
	│ image   │ functional-198694 image save --daemon kicbase/echo-server:functional-198694 --alsologtostderr                                                             │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │ 01 Dec 25 21:26 UTC │
	│ ssh     │ functional-198694 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │ 01 Dec 25 21:26 UTC │
	│ ssh     │ functional-198694 ssh sudo cat /etc/test/nested/copy/486002/hosts                                                                                         │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │ 01 Dec 25 21:26 UTC │
	│ service │ functional-198694 service list                                                                                                                            │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │                     │
	│ ssh     │ functional-198694 ssh echo hello                                                                                                                          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │ 01 Dec 25 21:26 UTC │
	│ service │ functional-198694 service list -o json                                                                                                                    │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │                     │
	│ ssh     │ functional-198694 ssh cat /etc/hostname                                                                                                                   │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │ 01 Dec 25 21:26 UTC │
	│ service │ functional-198694 service --namespace=default --https --url hello-node                                                                                    │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │                     │
	│ tunnel  │ functional-198694 tunnel --alsologtostderr                                                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │                     │
	│ tunnel  │ functional-198694 tunnel --alsologtostderr                                                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │                     │
	│ service │ functional-198694 service hello-node --url --format={{.IP}}                                                                                               │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │                     │
	│ service │ functional-198694 service hello-node --url                                                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │                     │
	│ tunnel  │ functional-198694 tunnel --alsologtostderr                                                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:26 UTC │                     │
	│ addons  │ functional-198694 addons list                                                                                                                             │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	│ addons  │ functional-198694 addons list -o json                                                                                                                     │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 21:13:35
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 21:13:35.338314  527777 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:13:35.338426  527777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:13:35.338431  527777 out.go:374] Setting ErrFile to fd 2...
	I1201 21:13:35.338435  527777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:13:35.339011  527777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:13:35.339669  527777 out.go:368] Setting JSON to false
	I1201 21:13:35.340628  527777 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10565,"bootTime":1764613051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 21:13:35.340767  527777 start.go:143] virtualization:  
	I1201 21:13:35.344231  527777 out.go:179] * [functional-198694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 21:13:35.348003  527777 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 21:13:35.348182  527777 notify.go:221] Checking for updates...
	I1201 21:13:35.353585  527777 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 21:13:35.356421  527777 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:13:35.359084  527777 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 21:13:35.361859  527777 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 21:13:35.364606  527777 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 21:13:35.367906  527777 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:13:35.368004  527777 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 21:13:35.404299  527777 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 21:13:35.404422  527777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:13:35.463515  527777 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-01 21:13:35.453981974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:13:35.463609  527777 docker.go:319] overlay module found
	I1201 21:13:35.466875  527777 out.go:179] * Using the docker driver based on existing profile
	I1201 21:13:35.469781  527777 start.go:309] selected driver: docker
	I1201 21:13:35.469793  527777 start.go:927] validating driver "docker" against &{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:13:35.469882  527777 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 21:13:35.469988  527777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:13:35.530406  527777 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-01 21:13:35.520549629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:13:35.530815  527777 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 21:13:35.530841  527777 cni.go:84] Creating CNI manager for ""
	I1201 21:13:35.530897  527777 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:13:35.530938  527777 start.go:353] cluster config:
	{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:13:35.534086  527777 out.go:179] * Starting "functional-198694" primary control-plane node in "functional-198694" cluster
	I1201 21:13:35.536995  527777 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 21:13:35.539929  527777 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 21:13:35.542786  527777 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:13:35.542873  527777 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 21:13:35.563189  527777 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 21:13:35.563200  527777 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1201 21:13:35.608993  527777 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1201 21:13:35.806403  527777 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1201 21:13:35.806571  527777 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/config.json ...
	I1201 21:13:35.806600  527777 cache.go:107] acquiring lock: {Name:mkc02adc0b0ac86da96d7b1c6f73dd96db198bdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806692  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 21:13:35.806702  527777 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 120.653µs
	I1201 21:13:35.806710  527777 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 21:13:35.806721  527777 cache.go:107] acquiring lock: {Name:mk453dcc67fddeb9d4497c9de9efb4fa1295449c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806753  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 21:13:35.806758  527777 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 38.825µs
	I1201 21:13:35.806764  527777 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806774  527777 cache.go:107] acquiring lock: {Name:mk419ddf7fad28d46855543ef84396416e53becc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806815  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 21:13:35.806831  527777 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 48.901µs
	I1201 21:13:35.806838  527777 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806850  527777 cache.go:243] Successfully downloaded all kic artifacts
	I1201 21:13:35.806851  527777 cache.go:107] acquiring lock: {Name:mka55d294ab8a696f44b35601f713e0abbf24c5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806885  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 21:13:35.806880  527777 start.go:360] acquireMachinesLock for functional-198694: {Name:mk75190be8638b73bbf357fb21be879be3d32136 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806893  527777 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 44.405µs
	I1201 21:13:35.806899  527777 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806914  527777 cache.go:107] acquiring lock: {Name:mk6dcec1fac0989e081c750d70caa7d5974f0e1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806939  527777 start.go:364] duration metric: took 38.547µs to acquireMachinesLock for "functional-198694"
	I1201 21:13:35.806944  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 21:13:35.806949  527777 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 42.124µs
	I1201 21:13:35.806954  527777 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806962  527777 start.go:96] Skipping create...Using existing machine configuration
	I1201 21:13:35.806968  527777 fix.go:54] fixHost starting: 
	I1201 21:13:35.806963  527777 cache.go:107] acquiring lock: {Name:mkf9aa1f704582196eb72cf90c132f43843b4423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806991  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1201 21:13:35.806995  527777 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.558µs
	I1201 21:13:35.807007  527777 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 21:13:35.807016  527777 cache.go:107] acquiring lock: {Name:mk60d129c4890b38a9b86e2bfa4a9fa21bc4f57a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.807045  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 21:13:35.807049  527777 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 34.657µs
	I1201 21:13:35.807054  527777 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 21:13:35.807062  527777 cache.go:107] acquiring lock: {Name:mk345d9c863dd9143d9156cb17f795118869c197 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.807089  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 21:13:35.807094  527777 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 32.54µs
	I1201 21:13:35.807099  527777 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 21:13:35.807107  527777 cache.go:87] Successfully saved all images to host disk.
	I1201 21:13:35.807314  527777 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:13:35.826290  527777 fix.go:112] recreateIfNeeded on functional-198694: state=Running err=<nil>
	W1201 21:13:35.826315  527777 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 21:13:35.829729  527777 out.go:252] * Updating the running docker "functional-198694" container ...
	I1201 21:13:35.829761  527777 machine.go:94] provisionDockerMachine start ...
	I1201 21:13:35.829853  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:35.849270  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:35.849646  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:35.849655  527777 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 21:13:36.014195  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:13:36.014211  527777 ubuntu.go:182] provisioning hostname "functional-198694"
	I1201 21:13:36.014280  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:36.035339  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:36.035672  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:36.035681  527777 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-198694 && echo "functional-198694" | sudo tee /etc/hostname
	I1201 21:13:36.197202  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:13:36.197287  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:36.217632  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:36.217935  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:36.217948  527777 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-198694' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-198694/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-198694' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 21:13:36.367610  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 21:13:36.367629  527777 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-482752/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-482752/.minikube}
	I1201 21:13:36.367658  527777 ubuntu.go:190] setting up certificates
	I1201 21:13:36.367666  527777 provision.go:84] configureAuth start
	I1201 21:13:36.367747  527777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:13:36.387555  527777 provision.go:143] copyHostCerts
	I1201 21:13:36.387627  527777 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem, removing ...
	I1201 21:13:36.387641  527777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 21:13:36.387724  527777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem (1082 bytes)
	I1201 21:13:36.387835  527777 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem, removing ...
	I1201 21:13:36.387840  527777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 21:13:36.387866  527777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem (1123 bytes)
	I1201 21:13:36.387928  527777 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem, removing ...
	I1201 21:13:36.387933  527777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 21:13:36.387959  527777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem (1675 bytes)
	I1201 21:13:36.388014  527777 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem org=jenkins.functional-198694 san=[127.0.0.1 192.168.49.2 functional-198694 localhost minikube]
	I1201 21:13:36.864413  527777 provision.go:177] copyRemoteCerts
	I1201 21:13:36.864488  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 21:13:36.864542  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:36.883147  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:36.987572  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 21:13:37.015924  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1201 21:13:37.037590  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1201 21:13:37.056483  527777 provision.go:87] duration metric: took 688.787749ms to configureAuth
	I1201 21:13:37.056502  527777 ubuntu.go:206] setting minikube options for container-runtime
	I1201 21:13:37.056696  527777 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:13:37.056802  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.075104  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:37.075454  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:37.075468  527777 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 21:13:37.432424  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 21:13:37.432439  527777 machine.go:97] duration metric: took 1.602671146s to provisionDockerMachine
	I1201 21:13:37.432451  527777 start.go:293] postStartSetup for "functional-198694" (driver="docker")
	I1201 21:13:37.432466  527777 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 21:13:37.432544  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 21:13:37.432606  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.457485  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.563609  527777 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 21:13:37.567292  527777 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 21:13:37.567310  527777 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 21:13:37.567329  527777 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/addons for local assets ...
	I1201 21:13:37.567430  527777 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/files for local assets ...
	I1201 21:13:37.567517  527777 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> 4860022.pem in /etc/ssl/certs
	I1201 21:13:37.567613  527777 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts -> hosts in /etc/test/nested/copy/486002
	I1201 21:13:37.567670  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/486002
	I1201 21:13:37.575725  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:13:37.593481  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts --> /etc/test/nested/copy/486002/hosts (40 bytes)
	I1201 21:13:37.611620  527777 start.go:296] duration metric: took 179.151488ms for postStartSetup
	I1201 21:13:37.611718  527777 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 21:13:37.611798  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.629587  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.732362  527777 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 21:13:37.737388  527777 fix.go:56] duration metric: took 1.930412863s for fixHost
	I1201 21:13:37.737414  527777 start.go:83] releasing machines lock for "functional-198694", held for 1.930466515s
	I1201 21:13:37.737492  527777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:13:37.754641  527777 ssh_runner.go:195] Run: cat /version.json
	I1201 21:13:37.754685  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.754954  527777 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 21:13:37.755010  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.773486  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.787845  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.875124  527777 ssh_runner.go:195] Run: systemctl --version
	I1201 21:13:37.974016  527777 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 21:13:38.017000  527777 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 21:13:38.021875  527777 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 21:13:38.021957  527777 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 21:13:38.031594  527777 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 21:13:38.031622  527777 start.go:496] detecting cgroup driver to use...
	I1201 21:13:38.031660  527777 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1201 21:13:38.031747  527777 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 21:13:38.049187  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 21:13:38.064637  527777 docker.go:218] disabling cri-docker service (if available) ...
	I1201 21:13:38.064721  527777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 21:13:38.083239  527777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 21:13:38.097453  527777 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 21:13:38.249215  527777 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 21:13:38.371691  527777 docker.go:234] disabling docker service ...
	I1201 21:13:38.371769  527777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 21:13:38.388782  527777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 21:13:38.402306  527777 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 21:13:38.513914  527777 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 21:13:38.630153  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 21:13:38.644475  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 21:13:38.658966  527777 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 21:13:38.659023  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.668135  527777 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 21:13:38.668192  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.677509  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.686682  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.695781  527777 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 21:13:38.704147  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.713420  527777 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.722196  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.731481  527777 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 21:13:38.740144  527777 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 21:13:38.748176  527777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:13:38.858298  527777 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 21:13:39.035375  527777 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 21:13:39.035464  527777 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 21:13:39.039668  527777 start.go:564] Will wait 60s for crictl version
	I1201 21:13:39.039730  527777 ssh_runner.go:195] Run: which crictl
	I1201 21:13:39.043260  527777 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 21:13:39.078386  527777 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 21:13:39.078499  527777 ssh_runner.go:195] Run: crio --version
	I1201 21:13:39.110667  527777 ssh_runner.go:195] Run: crio --version
	I1201 21:13:39.146750  527777 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1201 21:13:39.149800  527777 cli_runner.go:164] Run: docker network inspect functional-198694 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 21:13:39.166717  527777 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1201 21:13:39.173972  527777 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1201 21:13:39.176755  527777 kubeadm.go:884] updating cluster {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 21:13:39.176898  527777 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:13:39.176968  527777 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 21:13:39.210945  527777 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 21:13:39.210958  527777 cache_images.go:86] Images are preloaded, skipping loading
	I1201 21:13:39.210965  527777 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1201 21:13:39.211070  527777 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-198694 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 21:13:39.211187  527777 ssh_runner.go:195] Run: crio config
	I1201 21:13:39.284437  527777 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1201 21:13:39.284481  527777 cni.go:84] Creating CNI manager for ""
	I1201 21:13:39.284491  527777 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:13:39.284499  527777 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 21:13:39.284522  527777 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-198694 NodeName:functional-198694 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 21:13:39.284675  527777 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-198694"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 21:13:39.284759  527777 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 21:13:39.293198  527777 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 21:13:39.293275  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 21:13:39.301290  527777 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 21:13:39.315108  527777 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 21:13:39.329814  527777 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1201 21:13:39.343669  527777 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1201 21:13:39.347900  527777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:13:39.461077  527777 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 21:13:39.654352  527777 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694 for IP: 192.168.49.2
	I1201 21:13:39.654364  527777 certs.go:195] generating shared ca certs ...
	I1201 21:13:39.654379  527777 certs.go:227] acquiring lock for ca certs: {Name:mk0475ccdbd6f854bab22fd8dfb32cc1af021336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:13:39.654515  527777 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key
	I1201 21:13:39.654555  527777 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key
	I1201 21:13:39.654570  527777 certs.go:257] generating profile certs ...
	I1201 21:13:39.654666  527777 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key
	I1201 21:13:39.654727  527777 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key.ab5f5a28
	I1201 21:13:39.654771  527777 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key
	I1201 21:13:39.654890  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem (1338 bytes)
	W1201 21:13:39.654921  527777 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002_empty.pem, impossibly tiny 0 bytes
	I1201 21:13:39.654928  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 21:13:39.654965  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem (1082 bytes)
	I1201 21:13:39.655015  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem (1123 bytes)
	I1201 21:13:39.655038  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem (1675 bytes)
	I1201 21:13:39.655084  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:13:39.655762  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 21:13:39.683427  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 21:13:39.704542  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 21:13:39.724282  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1201 21:13:39.744046  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 21:13:39.765204  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 21:13:39.784677  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 21:13:39.803885  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 21:13:39.822965  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /usr/share/ca-certificates/4860022.pem (1708 bytes)
	I1201 21:13:39.842026  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 21:13:39.860451  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem --> /usr/share/ca-certificates/486002.pem (1338 bytes)
	I1201 21:13:39.879380  527777 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 21:13:39.893847  527777 ssh_runner.go:195] Run: openssl version
	I1201 21:13:39.900456  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 21:13:39.910454  527777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:13:39.914599  527777 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:13:39.914672  527777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:13:39.957573  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 21:13:39.966576  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/486002.pem && ln -fs /usr/share/ca-certificates/486002.pem /etc/ssl/certs/486002.pem"
	I1201 21:13:39.976178  527777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/486002.pem
	I1201 21:13:39.980649  527777 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 21:13:39.980729  527777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/486002.pem
	I1201 21:13:40.025575  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/486002.pem /etc/ssl/certs/51391683.0"
	I1201 21:13:40.037195  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4860022.pem && ln -fs /usr/share/ca-certificates/4860022.pem /etc/ssl/certs/4860022.pem"
	I1201 21:13:40.047283  527777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4860022.pem
	I1201 21:13:40.051903  527777 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 21:13:40.051976  527777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4860022.pem
	I1201 21:13:40.094396  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4860022.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 21:13:40.103155  527777 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 21:13:40.107392  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 21:13:40.150081  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 21:13:40.192825  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 21:13:40.234772  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 21:13:40.276722  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 21:13:40.318487  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 21:13:40.360912  527777 kubeadm.go:401] StartCluster: {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:13:40.361001  527777 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 21:13:40.361062  527777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 21:13:40.390972  527777 cri.go:89] found id: ""
	I1201 21:13:40.391046  527777 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 21:13:40.399343  527777 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 21:13:40.399354  527777 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 21:13:40.399410  527777 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 21:13:40.407260  527777 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.407785  527777 kubeconfig.go:125] found "functional-198694" server: "https://192.168.49.2:8441"
	I1201 21:13:40.409130  527777 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 21:13:40.418081  527777 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-01 20:59:03.175067800 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-01 21:13:39.337074315 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1201 21:13:40.418090  527777 kubeadm.go:1161] stopping kube-system containers ...
	I1201 21:13:40.418103  527777 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1201 21:13:40.418160  527777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 21:13:40.458573  527777 cri.go:89] found id: ""
	I1201 21:13:40.458639  527777 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1201 21:13:40.477506  527777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 21:13:40.486524  527777 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  1 21:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  1 21:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  1 21:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  1 21:03 /etc/kubernetes/scheduler.conf
	
	I1201 21:13:40.486611  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1201 21:13:40.494590  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1201 21:13:40.502887  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.502952  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 21:13:40.511354  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1201 21:13:40.519815  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.519872  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 21:13:40.528897  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1201 21:13:40.537744  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.537819  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 21:13:40.546165  527777 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 21:13:40.555103  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:40.603848  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:41.842196  527777 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.238322261s)
	I1201 21:13:41.842271  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:42.059194  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:42.130722  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:42.199813  527777 api_server.go:52] waiting for apiserver process to appear ...
	I1201 21:13:42.199901  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:42.700072  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:43.200731  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:43.700027  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:44.200776  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:44.700945  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:45.200498  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:45.700869  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:46.200358  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:46.700900  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:47.200833  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:47.700432  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:48.200342  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:48.700205  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:49.200031  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:49.700873  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:50.200171  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:50.700532  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:51.199969  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:51.700026  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:52.200123  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:52.700046  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:53.200038  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:53.700680  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:54.200039  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:54.700097  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:55.200910  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:55.700336  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:56.200957  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:56.700757  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:57.200131  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:57.700100  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:58.200357  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:58.700032  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:59.200053  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:59.700687  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:00.202701  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:00.700294  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:01.200032  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:01.700969  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:02.200893  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:02.700398  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:03.200784  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:03.701004  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:04.200950  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:04.700759  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:05.200806  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:05.700896  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:06.200904  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:06.700082  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:07.200046  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:07.700894  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:08.200914  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:08.700874  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:09.200345  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:09.700662  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:10.200989  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:10.700974  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:11.200085  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:11.700353  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:12.200389  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:12.700081  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:13.200064  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:13.700099  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:14.200140  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:14.699984  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:15.200508  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:15.700076  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:16.200220  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:16.700081  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:17.200107  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:17.700353  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:18.201026  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:18.700092  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:19.200816  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:19.700821  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:20.200768  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:20.700817  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:21.200081  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:21.700135  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:22.200076  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:22.700140  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:23.200109  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:23.700040  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:24.200098  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:24.700221  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:25.200360  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:25.700585  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:26.200737  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:26.700431  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:27.200635  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:27.699983  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:28.200340  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:28.700127  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:29.200075  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:29.700352  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:30.200740  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:30.700086  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:31.200338  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:31.700759  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:32.200785  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:32.700903  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:33.200627  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:33.700920  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:34.200039  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:34.700285  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:35.200800  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:35.700353  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:36.200091  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:36.700843  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:37.200016  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:37.700190  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:38.200098  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:38.700171  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:39.200767  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:39.700973  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:40.200048  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:40.700746  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:41.200808  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:41.700037  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:42.200288  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:42.200384  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:42.231074  527777 cri.go:89] found id: ""
	I1201 21:14:42.231090  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.231099  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:42.231105  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:42.231205  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:42.260877  527777 cri.go:89] found id: ""
	I1201 21:14:42.260892  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.260900  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:42.260906  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:42.260972  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:42.290930  527777 cri.go:89] found id: ""
	I1201 21:14:42.290944  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.290953  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:42.290960  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:42.291034  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:42.323761  527777 cri.go:89] found id: ""
	I1201 21:14:42.323776  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.323784  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:42.323790  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:42.323870  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:42.356722  527777 cri.go:89] found id: ""
	I1201 21:14:42.356738  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.356748  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:42.356756  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:42.356820  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:42.387639  527777 cri.go:89] found id: ""
	I1201 21:14:42.387654  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.387661  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:42.387667  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:42.387738  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:42.433777  527777 cri.go:89] found id: ""
	I1201 21:14:42.433791  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.433798  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:42.433806  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:42.433815  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:42.520716  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:42.520743  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:42.536803  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:42.536820  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:42.605090  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:42.597365   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.598034   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.599719   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.600043   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.601473   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:42.597365   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.598034   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.599719   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.600043   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.601473   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:42.605114  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:42.605125  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:42.679935  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:42.679957  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:45.213941  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:45.229905  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:45.229984  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:45.276158  527777 cri.go:89] found id: ""
	I1201 21:14:45.276174  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.276181  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:45.276187  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:45.276259  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:45.307844  527777 cri.go:89] found id: ""
	I1201 21:14:45.307859  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.307867  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:45.307872  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:45.307946  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:45.339831  527777 cri.go:89] found id: ""
	I1201 21:14:45.339845  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.339853  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:45.339858  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:45.339922  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:45.371617  527777 cri.go:89] found id: ""
	I1201 21:14:45.371632  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.371640  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:45.371646  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:45.371705  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:45.399984  527777 cri.go:89] found id: ""
	I1201 21:14:45.400005  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.400012  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:45.400017  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:45.400086  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:45.441742  527777 cri.go:89] found id: ""
	I1201 21:14:45.441755  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.441763  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:45.441769  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:45.441843  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:45.474201  527777 cri.go:89] found id: ""
	I1201 21:14:45.474216  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.474223  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:45.474231  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:45.474241  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:45.541899  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:45.541920  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:45.557525  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:45.557541  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:45.623123  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:45.614602   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.615281   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.616956   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.617711   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.619627   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:45.614602   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.615281   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.616956   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.617711   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.619627   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:45.623165  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:45.623176  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:45.703324  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:45.703344  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:48.232324  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:48.242709  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:48.242767  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:48.273768  527777 cri.go:89] found id: ""
	I1201 21:14:48.273782  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.273790  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:48.273795  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:48.273853  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:48.305133  527777 cri.go:89] found id: ""
	I1201 21:14:48.305147  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.305154  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:48.305159  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:48.305218  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:48.331706  527777 cri.go:89] found id: ""
	I1201 21:14:48.331720  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.331727  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:48.331733  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:48.331805  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:48.357401  527777 cri.go:89] found id: ""
	I1201 21:14:48.357414  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.357421  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:48.357426  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:48.357485  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:48.382601  527777 cri.go:89] found id: ""
	I1201 21:14:48.382615  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.382622  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:48.382627  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:48.382685  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:48.414103  527777 cri.go:89] found id: ""
	I1201 21:14:48.414117  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.414124  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:48.414130  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:48.414192  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:48.444275  527777 cri.go:89] found id: ""
	I1201 21:14:48.444289  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.444296  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:48.444304  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:48.444315  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:48.509613  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:48.500550   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.501177   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.502982   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.503577   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.505352   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:48.500550   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.501177   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.502982   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.503577   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.505352   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:48.509633  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:48.509645  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:48.583849  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:48.583868  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:48.611095  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:48.611113  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:48.678045  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:48.678067  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:51.193681  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:51.204158  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:51.204220  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:51.228546  527777 cri.go:89] found id: ""
	I1201 21:14:51.228560  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.228567  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:51.228573  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:51.228641  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:51.253363  527777 cri.go:89] found id: ""
	I1201 21:14:51.253377  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.253384  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:51.253389  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:51.253450  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:51.281388  527777 cri.go:89] found id: ""
	I1201 21:14:51.281403  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.281410  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:51.281415  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:51.281472  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:51.312321  527777 cri.go:89] found id: ""
	I1201 21:14:51.312334  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.312341  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:51.312347  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:51.312404  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:51.338071  527777 cri.go:89] found id: ""
	I1201 21:14:51.338084  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.338092  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:51.338097  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:51.338160  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:51.362911  527777 cri.go:89] found id: ""
	I1201 21:14:51.362925  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.362932  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:51.362938  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:51.362996  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:51.392560  527777 cri.go:89] found id: ""
	I1201 21:14:51.392575  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.392582  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:51.392589  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:51.392600  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:51.462446  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:51.462465  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:51.483328  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:51.483345  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:51.550537  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:51.542392   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.543042   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.544572   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.545190   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.546918   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:14:51.550546  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:51.550556  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:51.627463  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:51.627484  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:54.160747  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:54.171038  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:54.171098  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:54.197306  527777 cri.go:89] found id: ""
	I1201 21:14:54.197320  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.197327  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:54.197333  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:54.197389  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:54.227205  527777 cri.go:89] found id: ""
	I1201 21:14:54.227219  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.227226  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:54.227232  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:54.227293  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:54.254126  527777 cri.go:89] found id: ""
	I1201 21:14:54.254141  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.254149  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:54.254156  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:54.254218  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:54.282152  527777 cri.go:89] found id: ""
	I1201 21:14:54.282166  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.282173  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:54.282178  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:54.282234  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:54.312220  527777 cri.go:89] found id: ""
	I1201 21:14:54.312234  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.312241  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:54.312246  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:54.312314  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:54.338233  527777 cri.go:89] found id: ""
	I1201 21:14:54.338247  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.338253  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:54.338259  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:54.338317  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:54.364068  527777 cri.go:89] found id: ""
	I1201 21:14:54.364082  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.364089  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:54.364097  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:54.364119  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:54.429655  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:54.429673  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:54.445696  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:54.445712  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:54.514079  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:54.504989   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.506549   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.507008   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.508528   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.508981   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:14:54.514090  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:54.514100  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:54.590504  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:54.590526  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:57.119842  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:57.129802  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:57.129862  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:57.154250  527777 cri.go:89] found id: ""
	I1201 21:14:57.154263  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.154271  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:57.154276  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:57.154332  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:57.179738  527777 cri.go:89] found id: ""
	I1201 21:14:57.179761  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.179768  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:57.179775  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:57.179838  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:57.209881  527777 cri.go:89] found id: ""
	I1201 21:14:57.209895  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.209902  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:57.209907  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:57.209964  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:57.239761  527777 cri.go:89] found id: ""
	I1201 21:14:57.239775  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.239782  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:57.239787  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:57.239851  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:57.265438  527777 cri.go:89] found id: ""
	I1201 21:14:57.265457  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.265464  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:57.265470  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:57.265531  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:57.292117  527777 cri.go:89] found id: ""
	I1201 21:14:57.292131  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.292139  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:57.292145  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:57.292211  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:57.321507  527777 cri.go:89] found id: ""
	I1201 21:14:57.321526  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.321539  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:57.321547  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:57.321562  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:57.355489  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:57.355506  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:57.422253  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:57.422274  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:57.439866  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:57.439884  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:57.517974  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:57.510196   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.510601   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.512297   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.512646   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.514195   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:14:57.517984  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:57.517997  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:00.095116  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:00.167383  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:00.167484  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:00.305857  527777 cri.go:89] found id: ""
	I1201 21:15:00.305874  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.305881  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:00.305888  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:00.305960  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:00.412948  527777 cri.go:89] found id: ""
	I1201 21:15:00.412964  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.412972  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:00.412979  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:00.413063  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:00.497486  527777 cri.go:89] found id: ""
	I1201 21:15:00.497503  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.497511  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:00.497517  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:00.497588  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:00.548544  527777 cri.go:89] found id: ""
	I1201 21:15:00.548558  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.548565  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:00.548571  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:00.548635  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:00.594658  527777 cri.go:89] found id: ""
	I1201 21:15:00.594674  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.594682  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:00.594688  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:00.594758  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:00.625642  527777 cri.go:89] found id: ""
	I1201 21:15:00.625658  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.625665  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:00.625672  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:00.625741  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:00.657944  527777 cri.go:89] found id: ""
	I1201 21:15:00.657968  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.657977  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:00.657987  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:00.657999  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:00.741394  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:00.730733   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.731901   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.732998   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.734744   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.736546   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:15:00.741407  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:00.741425  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:00.821320  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:00.821344  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:00.857348  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:00.857380  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:00.927631  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:00.927652  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:03.446387  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:03.456673  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:03.456742  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:03.481752  527777 cri.go:89] found id: ""
	I1201 21:15:03.481766  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.481773  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:03.481779  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:03.481837  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:03.509959  527777 cri.go:89] found id: ""
	I1201 21:15:03.509974  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.509982  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:03.509987  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:03.510050  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:03.536645  527777 cri.go:89] found id: ""
	I1201 21:15:03.536659  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.536665  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:03.536671  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:03.536738  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:03.562917  527777 cri.go:89] found id: ""
	I1201 21:15:03.562932  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.562939  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:03.562945  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:03.563005  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:03.589891  527777 cri.go:89] found id: ""
	I1201 21:15:03.589905  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.589912  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:03.589918  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:03.589977  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:03.622362  527777 cri.go:89] found id: ""
	I1201 21:15:03.622376  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.622384  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:03.622390  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:03.622451  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:03.649882  527777 cri.go:89] found id: ""
	I1201 21:15:03.649897  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.649904  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:03.649912  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:03.649922  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:03.726812  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:03.726832  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:03.741643  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:03.741659  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:03.807830  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:03.800226   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.800973   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.802491   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.802813   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.804371   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:15:03.807840  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:03.807851  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:03.882248  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:03.882268  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:06.412792  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:06.423457  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:06.423520  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:06.450416  527777 cri.go:89] found id: ""
	I1201 21:15:06.450434  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.450441  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:06.450461  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:06.450552  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:06.476229  527777 cri.go:89] found id: ""
	I1201 21:15:06.476243  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.476251  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:06.476257  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:06.476313  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:06.504311  527777 cri.go:89] found id: ""
	I1201 21:15:06.504326  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.504333  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:06.504339  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:06.504400  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:06.531500  527777 cri.go:89] found id: ""
	I1201 21:15:06.531515  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.531523  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:06.531529  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:06.531598  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:06.557205  527777 cri.go:89] found id: ""
	I1201 21:15:06.557219  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.557226  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:06.557231  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:06.557296  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:06.583224  527777 cri.go:89] found id: ""
	I1201 21:15:06.583237  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.583244  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:06.583250  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:06.583309  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:06.609560  527777 cri.go:89] found id: ""
	I1201 21:15:06.609574  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.609581  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:06.609589  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:06.609600  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:06.688119  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:06.688138  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:06.718171  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:06.718187  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:06.788360  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:06.788382  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:06.803516  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:06.803532  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:06.871576  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:06.863363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.864057   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.865787   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.866363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.867937   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:06.863363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.864057   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.865787   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.866363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.867937   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:09.373262  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:09.384129  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:09.384191  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:09.415353  527777 cri.go:89] found id: ""
	I1201 21:15:09.415369  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.415377  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:09.415384  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:09.415449  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:09.441666  527777 cri.go:89] found id: ""
	I1201 21:15:09.441681  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.441689  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:09.441707  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:09.441773  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:09.468735  527777 cri.go:89] found id: ""
	I1201 21:15:09.468749  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.468756  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:09.468761  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:09.468820  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:09.495871  527777 cri.go:89] found id: ""
	I1201 21:15:09.495885  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.495892  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:09.495898  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:09.495960  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:09.522124  527777 cri.go:89] found id: ""
	I1201 21:15:09.522138  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.522145  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:09.522151  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:09.522222  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:09.548540  527777 cri.go:89] found id: ""
	I1201 21:15:09.548554  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.548562  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:09.548568  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:09.548628  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:09.581799  527777 cri.go:89] found id: ""
	I1201 21:15:09.581814  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.581823  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:09.581831  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:09.581842  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:09.653172  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:09.653196  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:09.668649  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:09.668666  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:09.742062  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:09.733951   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.734515   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736072   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736575   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.738046   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:09.733951   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.734515   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736072   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736575   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.738046   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:09.742072  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:09.742085  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:09.817239  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:09.817259  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:12.348410  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:12.358969  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:12.359036  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:12.384762  527777 cri.go:89] found id: ""
	I1201 21:15:12.384776  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.384783  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:12.384788  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:12.384849  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:12.411423  527777 cri.go:89] found id: ""
	I1201 21:15:12.411437  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.411444  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:12.411449  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:12.411508  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:12.436624  527777 cri.go:89] found id: ""
	I1201 21:15:12.436638  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.436645  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:12.436650  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:12.436708  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:12.462632  527777 cri.go:89] found id: ""
	I1201 21:15:12.462647  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.462654  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:12.462661  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:12.462724  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:12.488511  527777 cri.go:89] found id: ""
	I1201 21:15:12.488526  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.488537  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:12.488542  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:12.488601  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:12.514421  527777 cri.go:89] found id: ""
	I1201 21:15:12.514434  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.514441  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:12.514448  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:12.514513  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:12.541557  527777 cri.go:89] found id: ""
	I1201 21:15:12.541571  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.541579  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:12.541587  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:12.541598  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:12.573231  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:12.573249  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:12.641686  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:12.641707  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:12.658713  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:12.658727  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:12.743144  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:12.734976   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.735722   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.737218   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.737705   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.739191   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:12.734976   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.735722   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.737218   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.737705   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.739191   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:12.743155  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:12.743166  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:15.318465  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:15.329023  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:15.329088  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:15.358063  527777 cri.go:89] found id: ""
	I1201 21:15:15.358077  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.358084  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:15.358090  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:15.358148  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:15.387949  527777 cri.go:89] found id: ""
	I1201 21:15:15.387963  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.387971  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:15.387976  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:15.388040  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:15.414396  527777 cri.go:89] found id: ""
	I1201 21:15:15.414412  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.414420  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:15.414425  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:15.414489  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:15.440368  527777 cri.go:89] found id: ""
	I1201 21:15:15.440383  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.440390  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:15.440396  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:15.440455  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:15.471515  527777 cri.go:89] found id: ""
	I1201 21:15:15.471529  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.471538  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:15.471544  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:15.471605  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:15.502736  527777 cri.go:89] found id: ""
	I1201 21:15:15.502750  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.502764  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:15.502770  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:15.502834  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:15.530525  527777 cri.go:89] found id: ""
	I1201 21:15:15.530540  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.530548  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:15.530555  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:15.530566  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:15.597211  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:15.588836   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.589648   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.591302   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.591840   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.593419   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:15.588836   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.589648   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.591302   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.591840   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.593419   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:15.597221  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:15.597232  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:15.673960  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:15.673983  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:15.708635  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:15.708651  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:15.779672  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:15.779693  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:18.296490  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:18.307184  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:18.307258  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:18.340992  527777 cri.go:89] found id: ""
	I1201 21:15:18.341006  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.341021  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:18.341027  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:18.341093  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:18.370602  527777 cri.go:89] found id: ""
	I1201 21:15:18.370626  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.370633  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:18.370642  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:18.370713  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:18.398425  527777 cri.go:89] found id: ""
	I1201 21:15:18.398440  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.398447  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:18.398453  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:18.398527  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:18.424514  527777 cri.go:89] found id: ""
	I1201 21:15:18.424530  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.424537  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:18.424561  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:18.424641  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:18.451718  527777 cri.go:89] found id: ""
	I1201 21:15:18.451732  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.451740  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:18.451746  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:18.451806  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:18.481779  527777 cri.go:89] found id: ""
	I1201 21:15:18.481804  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.481812  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:18.481818  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:18.481885  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:18.509744  527777 cri.go:89] found id: ""
	I1201 21:15:18.509760  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.509767  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:18.509775  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:18.509800  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:18.541318  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:18.541335  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:18.608586  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:18.608608  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:18.625859  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:18.625885  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:18.721362  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:18.711891   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.712647   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.714432   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.715256   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.717230   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:18.711891   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.712647   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.714432   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.715256   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.717230   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:18.721371  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:18.721383  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:21.298842  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:21.309420  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:21.309481  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:21.339650  527777 cri.go:89] found id: ""
	I1201 21:15:21.339664  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.339672  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:21.339678  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:21.339739  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:21.369828  527777 cri.go:89] found id: ""
	I1201 21:15:21.369843  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.369850  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:21.369857  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:21.369925  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:21.396833  527777 cri.go:89] found id: ""
	I1201 21:15:21.396860  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.396868  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:21.396874  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:21.396948  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:21.423340  527777 cri.go:89] found id: ""
	I1201 21:15:21.423354  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.423363  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:21.423369  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:21.423429  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:21.450028  527777 cri.go:89] found id: ""
	I1201 21:15:21.450041  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.450051  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:21.450057  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:21.450115  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:21.476290  527777 cri.go:89] found id: ""
	I1201 21:15:21.476305  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.476312  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:21.476317  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:21.476378  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:21.503570  527777 cri.go:89] found id: ""
	I1201 21:15:21.503591  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.503599  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:21.503607  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:21.503622  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:21.518970  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:21.518995  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:21.583522  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:21.575255   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.575783   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.577341   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.577753   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.579360   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:21.575255   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.575783   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.577341   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.577753   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.579360   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:21.583581  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:21.583592  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:21.662707  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:21.662730  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:21.693467  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:21.693484  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:24.268299  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:24.279383  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:24.279455  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:24.305720  527777 cri.go:89] found id: ""
	I1201 21:15:24.305733  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.305741  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:24.305746  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:24.305809  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:24.333862  527777 cri.go:89] found id: ""
	I1201 21:15:24.333878  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.333885  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:24.333891  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:24.333965  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:24.365916  527777 cri.go:89] found id: ""
	I1201 21:15:24.365931  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.365939  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:24.365948  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:24.366009  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:24.393185  527777 cri.go:89] found id: ""
	I1201 21:15:24.393202  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.393209  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:24.393216  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:24.393279  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:24.419532  527777 cri.go:89] found id: ""
	I1201 21:15:24.419547  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.419554  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:24.419560  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:24.419629  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:24.445529  527777 cri.go:89] found id: ""
	I1201 21:15:24.445543  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.445550  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:24.445557  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:24.445619  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:24.470988  527777 cri.go:89] found id: ""
	I1201 21:15:24.471002  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.471009  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:24.471017  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:24.471028  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:24.500416  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:24.500433  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:24.566009  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:24.566028  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:24.582350  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:24.582366  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:24.653085  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:24.643454   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.643885   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.645413   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.645743   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.647392   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:24.643454   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.643885   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.645413   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.645743   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.647392   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:24.653095  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:24.653106  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:27.239323  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:27.250432  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:27.250495  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:27.276796  527777 cri.go:89] found id: ""
	I1201 21:15:27.276824  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.276832  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:27.276837  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:27.276927  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:27.303592  527777 cri.go:89] found id: ""
	I1201 21:15:27.303607  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.303614  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:27.303620  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:27.303685  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:27.330141  527777 cri.go:89] found id: ""
	I1201 21:15:27.330155  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.330163  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:27.330168  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:27.330231  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:27.358477  527777 cri.go:89] found id: ""
	I1201 21:15:27.358491  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.358498  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:27.358503  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:27.358570  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:27.384519  527777 cri.go:89] found id: ""
	I1201 21:15:27.384533  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.384541  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:27.384547  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:27.384610  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:27.410788  527777 cri.go:89] found id: ""
	I1201 21:15:27.410804  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.410811  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:27.410817  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:27.410880  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:27.437727  527777 cri.go:89] found id: ""
	I1201 21:15:27.437742  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.437748  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:27.437756  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:27.437766  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:27.470359  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:27.470376  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:27.540219  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:27.540239  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:27.558165  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:27.558184  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:27.631990  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:27.624260   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.625006   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626587   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626906   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.628425   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:27.624260   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.625006   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626587   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626906   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.628425   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:27.632001  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:27.632013  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:30.214048  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:30.225906  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:30.225977  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:30.254528  527777 cri.go:89] found id: ""
	I1201 21:15:30.254544  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.254552  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:30.254559  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:30.254627  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:30.282356  527777 cri.go:89] found id: ""
	I1201 21:15:30.282371  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.282379  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:30.282385  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:30.282454  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:30.316244  527777 cri.go:89] found id: ""
	I1201 21:15:30.316266  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.316275  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:30.316281  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:30.316356  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:30.349310  527777 cri.go:89] found id: ""
	I1201 21:15:30.349324  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.349338  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:30.349345  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:30.349413  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:30.379233  527777 cri.go:89] found id: ""
	I1201 21:15:30.379259  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.379267  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:30.379273  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:30.379344  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:30.410578  527777 cri.go:89] found id: ""
	I1201 21:15:30.410592  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.410600  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:30.410607  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:30.410715  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:30.439343  527777 cri.go:89] found id: ""
	I1201 21:15:30.439357  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.439365  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:30.439373  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:30.439383  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:30.469722  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:30.469742  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:30.536977  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:30.536999  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:30.552719  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:30.552738  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:30.625200  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:30.616607   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.617292   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619213   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619905   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.621438   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:30.616607   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.617292   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619213   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619905   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.621438   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:30.625210  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:30.625221  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:33.202525  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:33.213081  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:33.213144  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:33.239684  527777 cri.go:89] found id: ""
	I1201 21:15:33.239699  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.239707  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:33.239713  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:33.239777  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:33.270046  527777 cri.go:89] found id: ""
	I1201 21:15:33.270060  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.270067  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:33.270073  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:33.270134  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:33.298615  527777 cri.go:89] found id: ""
	I1201 21:15:33.298631  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.298639  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:33.298646  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:33.298715  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:33.330389  527777 cri.go:89] found id: ""
	I1201 21:15:33.330403  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.330410  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:33.330416  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:33.330472  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:33.356054  527777 cri.go:89] found id: ""
	I1201 21:15:33.356068  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.356075  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:33.356081  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:33.356147  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:33.385771  527777 cri.go:89] found id: ""
	I1201 21:15:33.385784  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.385792  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:33.385797  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:33.385852  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:33.412562  527777 cri.go:89] found id: ""
	I1201 21:15:33.412580  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.412587  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:33.412601  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:33.412616  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:33.478848  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:33.478868  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:33.494280  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:33.494296  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:33.574855  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:33.566973   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.567796   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569492   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569806   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.571347   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:33.566973   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.567796   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569492   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569806   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.571347   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:33.574866  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:33.574876  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:33.653087  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:33.653110  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:36.198878  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:36.209291  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:36.209352  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:36.234666  527777 cri.go:89] found id: ""
	I1201 21:15:36.234679  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.234686  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:36.234691  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:36.234747  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:36.260740  527777 cri.go:89] found id: ""
	I1201 21:15:36.260754  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.260762  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:36.260767  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:36.260830  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:36.290674  527777 cri.go:89] found id: ""
	I1201 21:15:36.290688  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.290695  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:36.290700  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:36.290800  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:36.317381  527777 cri.go:89] found id: ""
	I1201 21:15:36.317396  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.317404  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:36.317410  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:36.317477  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:36.346371  527777 cri.go:89] found id: ""
	I1201 21:15:36.346384  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.346391  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:36.346396  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:36.346458  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:36.374545  527777 cri.go:89] found id: ""
	I1201 21:15:36.374559  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.374567  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:36.374573  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:36.374632  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:36.400298  527777 cri.go:89] found id: ""
	I1201 21:15:36.400324  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.400332  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:36.400339  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:36.400350  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:36.468826  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:36.468850  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:36.484335  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:36.484351  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:36.549841  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:36.541985   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.542492   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544187   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544616   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.546198   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:36.541985   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.542492   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544187   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544616   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.546198   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:36.549853  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:36.549864  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:36.630562  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:36.630587  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:39.169136  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:39.182222  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:39.182296  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:39.212188  527777 cri.go:89] found id: ""
	I1201 21:15:39.212202  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.212208  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:39.212213  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:39.212270  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:39.237215  527777 cri.go:89] found id: ""
	I1201 21:15:39.237229  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.237236  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:39.237241  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:39.237298  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:39.262205  527777 cri.go:89] found id: ""
	I1201 21:15:39.262219  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.262226  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:39.262232  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:39.262288  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:39.290471  527777 cri.go:89] found id: ""
	I1201 21:15:39.290485  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.290492  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:39.290498  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:39.290559  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:39.316212  527777 cri.go:89] found id: ""
	I1201 21:15:39.316238  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.316245  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:39.316251  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:39.316329  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:39.341014  527777 cri.go:89] found id: ""
	I1201 21:15:39.341037  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.341045  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:39.341051  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:39.341109  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:39.375032  527777 cri.go:89] found id: ""
	I1201 21:15:39.375058  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.375067  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:39.375083  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:39.375093  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:39.447422  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:39.447444  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:39.462737  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:39.462754  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:39.534298  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:39.526942   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.527544   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.528601   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.529043   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.530634   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:39.526942   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.527544   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.528601   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.529043   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.530634   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:39.534310  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:39.534320  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:39.611187  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:39.611208  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:42.146214  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:42.159004  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:42.159073  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:42.195922  527777 cri.go:89] found id: ""
	I1201 21:15:42.195938  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.195946  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:42.195952  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:42.196022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:42.230178  527777 cri.go:89] found id: ""
	I1201 21:15:42.230193  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.230200  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:42.230206  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:42.230271  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:42.261082  527777 cri.go:89] found id: ""
	I1201 21:15:42.261098  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.261105  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:42.261111  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:42.261188  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:42.295345  527777 cri.go:89] found id: ""
	I1201 21:15:42.295361  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.295377  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:42.295383  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:42.295457  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:42.330093  527777 cri.go:89] found id: ""
	I1201 21:15:42.330109  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.330116  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:42.330122  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:42.330186  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:42.358733  527777 cri.go:89] found id: ""
	I1201 21:15:42.358748  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.358756  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:42.358761  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:42.358823  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:42.388218  527777 cri.go:89] found id: ""
	I1201 21:15:42.388233  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.388240  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:42.388247  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:42.388258  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:42.469165  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:42.469185  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:42.500328  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:42.500345  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:42.569622  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:42.569642  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:42.585628  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:42.585645  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:42.654077  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:42.643924   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.644658   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.646844   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.647501   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.648880   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:42.643924   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.644658   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.646844   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.647501   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.648880   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:45.155990  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:45.177587  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:45.177664  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:45.216123  527777 cri.go:89] found id: ""
	I1201 21:15:45.216141  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.216149  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:45.216155  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:45.216241  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:45.257016  527777 cri.go:89] found id: ""
	I1201 21:15:45.257036  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.257044  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:45.257053  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:45.257139  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:45.310072  527777 cri.go:89] found id: ""
	I1201 21:15:45.310087  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.310095  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:45.310101  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:45.310165  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:45.339040  527777 cri.go:89] found id: ""
	I1201 21:15:45.339054  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.339062  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:45.339068  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:45.339154  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:45.370340  527777 cri.go:89] found id: ""
	I1201 21:15:45.370354  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.370361  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:45.370366  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:45.370426  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:45.396213  527777 cri.go:89] found id: ""
	I1201 21:15:45.396227  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.396234  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:45.396240  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:45.396299  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:45.423726  527777 cri.go:89] found id: ""
	I1201 21:15:45.423745  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.423755  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:45.423773  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:45.423784  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:45.490150  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:45.481612   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.482336   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.483955   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.484544   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.486132   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:45.481612   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.482336   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.483955   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.484544   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.486132   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:45.490161  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:45.490172  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:45.565908  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:45.565926  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:45.598740  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:45.598755  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:45.666263  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:45.666281  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:48.183348  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:48.193996  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:48.194068  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:48.221096  527777 cri.go:89] found id: ""
	I1201 21:15:48.221110  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.221117  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:48.221123  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:48.221180  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:48.247305  527777 cri.go:89] found id: ""
	I1201 21:15:48.247320  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.247328  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:48.247333  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:48.247392  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:48.277432  527777 cri.go:89] found id: ""
	I1201 21:15:48.277447  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.277453  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:48.277459  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:48.277521  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:48.304618  527777 cri.go:89] found id: ""
	I1201 21:15:48.304636  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.304643  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:48.304649  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:48.304712  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:48.331672  527777 cri.go:89] found id: ""
	I1201 21:15:48.331686  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.331694  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:48.331699  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:48.331757  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:48.360554  527777 cri.go:89] found id: ""
	I1201 21:15:48.360569  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.360577  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:48.360583  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:48.360640  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:48.385002  527777 cri.go:89] found id: ""
	I1201 21:15:48.385016  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.385023  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:48.385032  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:48.385043  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:48.414019  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:48.414036  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:48.479945  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:48.479964  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:48.495187  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:48.495206  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:48.560181  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:48.550756   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.551438   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.553149   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.554808   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.555445   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:48.550756   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.551438   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.553149   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.554808   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.555445   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:48.560191  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:48.560203  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:51.136751  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:51.147836  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:51.147914  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:51.178020  527777 cri.go:89] found id: ""
	I1201 21:15:51.178033  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.178041  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:51.178046  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:51.178106  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:51.206023  527777 cri.go:89] found id: ""
	I1201 21:15:51.206036  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.206044  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:51.206049  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:51.206150  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:51.236344  527777 cri.go:89] found id: ""
	I1201 21:15:51.236359  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.236366  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:51.236371  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:51.236434  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:51.262331  527777 cri.go:89] found id: ""
	I1201 21:15:51.262346  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.262353  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:51.262359  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:51.262419  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:51.290923  527777 cri.go:89] found id: ""
	I1201 21:15:51.290936  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.290944  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:51.290949  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:51.291016  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:51.318520  527777 cri.go:89] found id: ""
	I1201 21:15:51.318535  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.318542  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:51.318548  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:51.318607  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:51.345816  527777 cri.go:89] found id: ""
	I1201 21:15:51.345830  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.345837  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:51.345845  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:51.345857  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:51.361084  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:51.361100  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:51.427299  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:51.418365   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.419193   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.420874   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.421545   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.423332   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:51.418365   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.419193   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.420874   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.421545   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.423332   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:51.427309  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:51.427320  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:51.502906  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:51.502929  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:51.533675  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:51.533691  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:54.100640  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:54.111984  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:54.112047  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:54.137333  527777 cri.go:89] found id: ""
	I1201 21:15:54.137347  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.137353  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:54.137360  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:54.137419  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:54.166609  527777 cri.go:89] found id: ""
	I1201 21:15:54.166624  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.166635  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:54.166640  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:54.166705  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:54.193412  527777 cri.go:89] found id: ""
	I1201 21:15:54.193434  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.193441  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:54.193447  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:54.193509  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:54.219156  527777 cri.go:89] found id: ""
	I1201 21:15:54.219171  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.219178  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:54.219184  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:54.219241  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:54.248184  527777 cri.go:89] found id: ""
	I1201 21:15:54.248197  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.248204  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:54.248210  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:54.248278  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:54.274909  527777 cri.go:89] found id: ""
	I1201 21:15:54.274923  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.274931  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:54.274936  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:54.275003  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:54.300114  527777 cri.go:89] found id: ""
	I1201 21:15:54.300128  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.300135  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:54.300143  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:54.300154  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:54.366293  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:54.366312  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:54.382194  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:54.382210  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:54.446526  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:54.438379   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.439169   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.440693   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.441226   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.442826   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:54.438379   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.439169   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.440693   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.441226   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.442826   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:54.446536  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:54.446548  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:54.525097  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:54.525120  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:57.056605  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:57.067114  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:57.067185  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:57.096913  527777 cri.go:89] found id: ""
	I1201 21:15:57.096926  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.096933  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:57.096939  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:57.096995  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:57.124785  527777 cri.go:89] found id: ""
	I1201 21:15:57.124799  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.124806  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:57.124812  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:57.124877  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:57.151613  527777 cri.go:89] found id: ""
	I1201 21:15:57.151628  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.151635  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:57.151640  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:57.151702  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:57.181422  527777 cri.go:89] found id: ""
	I1201 21:15:57.181437  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.181445  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:57.181451  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:57.181510  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:57.207775  527777 cri.go:89] found id: ""
	I1201 21:15:57.207789  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.207796  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:57.207801  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:57.207861  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:57.232906  527777 cri.go:89] found id: ""
	I1201 21:15:57.232931  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.232939  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:57.232945  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:57.233016  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:57.259075  527777 cri.go:89] found id: ""
	I1201 21:15:57.259100  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.259107  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:57.259115  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:57.259126  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:57.288148  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:57.288164  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:57.355525  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:57.355545  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:57.371229  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:57.371246  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:57.439767  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:57.431231   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.431971   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.433692   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.434306   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.436090   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:57.431231   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.431971   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.433692   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.434306   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.436090   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:57.439779  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:57.439791  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:00.016574  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:00.063670  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:00.063743  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:00.181922  527777 cri.go:89] found id: ""
	I1201 21:16:00.181939  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.181947  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:00.181954  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:00.183169  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:00.318653  527777 cri.go:89] found id: ""
	I1201 21:16:00.318668  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.318676  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:00.318682  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:00.318752  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:00.366365  527777 cri.go:89] found id: ""
	I1201 21:16:00.366381  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.366391  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:00.366398  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:00.366497  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:00.432333  527777 cri.go:89] found id: ""
	I1201 21:16:00.432349  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.432358  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:00.432364  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:00.432436  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:00.487199  527777 cri.go:89] found id: ""
	I1201 21:16:00.487216  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.487238  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:00.487244  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:00.487315  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:00.541398  527777 cri.go:89] found id: ""
	I1201 21:16:00.541429  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.541438  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:00.541444  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:00.541530  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:00.577064  527777 cri.go:89] found id: ""
	I1201 21:16:00.577082  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.577095  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:00.577103  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:00.577116  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:00.646395  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:00.646418  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:00.667724  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:00.667741  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:00.750849  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:00.742119   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.743012   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.744823   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.745562   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.747124   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:00.742119   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.743012   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.744823   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.745562   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.747124   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:00.750860  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:00.750872  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:00.828858  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:00.828881  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:03.360481  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:03.371537  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:03.371611  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:03.401359  527777 cri.go:89] found id: ""
	I1201 21:16:03.401373  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.401380  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:03.401385  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:03.401452  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:03.428335  527777 cri.go:89] found id: ""
	I1201 21:16:03.428350  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.428358  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:03.428363  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:03.428424  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:03.460610  527777 cri.go:89] found id: ""
	I1201 21:16:03.460623  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.460630  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:03.460636  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:03.460695  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:03.489139  527777 cri.go:89] found id: ""
	I1201 21:16:03.489153  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.489161  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:03.489168  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:03.489234  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:03.519388  527777 cri.go:89] found id: ""
	I1201 21:16:03.519410  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.519418  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:03.519423  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:03.519490  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:03.549588  527777 cri.go:89] found id: ""
	I1201 21:16:03.549602  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.549610  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:03.549615  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:03.549678  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:03.576025  527777 cri.go:89] found id: ""
	I1201 21:16:03.576039  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.576047  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:03.576055  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:03.576066  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:03.605415  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:03.605431  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:03.675775  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:03.675797  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:03.691777  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:03.691793  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:03.765238  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:03.755858   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.756644   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.758434   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.759088   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.760930   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:03.755858   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.756644   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.758434   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.759088   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.760930   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:03.765250  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:03.765263  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:06.346338  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:06.356267  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:06.356325  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:06.380678  527777 cri.go:89] found id: ""
	I1201 21:16:06.380691  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.380717  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:06.380723  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:06.380780  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:06.410489  527777 cri.go:89] found id: ""
	I1201 21:16:06.410503  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.410518  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:06.410524  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:06.410588  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:06.443231  527777 cri.go:89] found id: ""
	I1201 21:16:06.443250  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.443257  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:06.443263  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:06.443334  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:06.468603  527777 cri.go:89] found id: ""
	I1201 21:16:06.468618  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.468625  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:06.468631  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:06.468700  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:06.493128  527777 cri.go:89] found id: ""
	I1201 21:16:06.493141  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.493148  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:06.493154  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:06.493212  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:06.518860  527777 cri.go:89] found id: ""
	I1201 21:16:06.518874  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.518881  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:06.518886  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:06.518958  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:06.545817  527777 cri.go:89] found id: ""
	I1201 21:16:06.545831  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.545839  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:06.545846  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:06.545857  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:06.610356  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:06.610378  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:06.625472  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:06.625488  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:06.722623  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:06.711338   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.712429   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.713404   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.714175   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.716915   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:06.711338   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.712429   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.713404   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.714175   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.716915   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:06.722633  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:06.722648  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:06.798208  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:06.798228  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:09.328391  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:09.339639  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:09.339706  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:09.368398  527777 cri.go:89] found id: ""
	I1201 21:16:09.368421  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.368428  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:09.368434  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:09.368512  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:09.398525  527777 cri.go:89] found id: ""
	I1201 21:16:09.398540  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.398548  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:09.398553  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:09.398615  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:09.426105  527777 cri.go:89] found id: ""
	I1201 21:16:09.426121  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.426129  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:09.426145  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:09.426205  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:09.456433  527777 cri.go:89] found id: ""
	I1201 21:16:09.456449  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.456456  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:09.456462  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:09.456525  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:09.488473  527777 cri.go:89] found id: ""
	I1201 21:16:09.488488  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.488495  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:09.488503  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:09.488563  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:09.514937  527777 cri.go:89] found id: ""
	I1201 21:16:09.514951  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.514958  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:09.514964  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:09.515027  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:09.545815  527777 cri.go:89] found id: ""
	I1201 21:16:09.545829  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.545837  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:09.545845  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:09.545857  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:09.575097  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:09.575115  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:09.642216  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:09.642237  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:09.663629  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:09.663645  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:09.745863  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:09.737300   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.737977   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.739598   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.740167   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.741918   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:09.737300   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.737977   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.739598   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.740167   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.741918   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:09.745876  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:09.745888  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:12.327853  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:12.338928  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:12.338992  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:12.372550  527777 cri.go:89] found id: ""
	I1201 21:16:12.372583  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.372591  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:12.372597  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:12.372662  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:12.402760  527777 cri.go:89] found id: ""
	I1201 21:16:12.402776  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.402784  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:12.402790  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:12.402851  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:12.429193  527777 cri.go:89] found id: ""
	I1201 21:16:12.429208  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.429215  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:12.429221  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:12.429286  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:12.456952  527777 cri.go:89] found id: ""
	I1201 21:16:12.456966  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.456973  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:12.456978  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:12.457037  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:12.483859  527777 cri.go:89] found id: ""
	I1201 21:16:12.483874  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.483881  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:12.483887  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:12.483950  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:12.510218  527777 cri.go:89] found id: ""
	I1201 21:16:12.510234  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.510242  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:12.510248  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:12.510323  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:12.536841  527777 cri.go:89] found id: ""
	I1201 21:16:12.536856  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.536864  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:12.536871  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:12.536881  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:12.612682  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:12.612702  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:12.641218  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:12.641235  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:12.719908  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:12.719930  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:12.736058  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:12.736077  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:12.803643  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:12.795056   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.795699   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.797375   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.798039   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.799685   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:12.795056   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.795699   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.797375   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.798039   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.799685   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:15.304417  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:15.314647  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:15.314707  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:15.342468  527777 cri.go:89] found id: ""
	I1201 21:16:15.342483  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.342491  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:15.342497  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:15.342559  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:15.369048  527777 cri.go:89] found id: ""
	I1201 21:16:15.369063  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.369071  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:15.369077  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:15.369140  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:15.393869  527777 cri.go:89] found id: ""
	I1201 21:16:15.393884  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.393891  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:15.393897  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:15.393960  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:15.420049  527777 cri.go:89] found id: ""
	I1201 21:16:15.420063  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.420071  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:15.420077  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:15.420136  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:15.450112  527777 cri.go:89] found id: ""
	I1201 21:16:15.450126  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.450134  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:15.450140  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:15.450201  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:15.475788  527777 cri.go:89] found id: ""
	I1201 21:16:15.475803  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.475811  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:15.475817  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:15.475884  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:15.502058  527777 cri.go:89] found id: ""
	I1201 21:16:15.502072  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.502084  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:15.502092  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:15.502102  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:15.535936  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:15.535953  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:15.601548  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:15.601568  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:15.617150  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:15.617167  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:15.694491  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:15.683261   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.684161   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.685978   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.686544   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.688226   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:15.683261   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.684161   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.685978   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.686544   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.688226   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:15.694502  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:15.694514  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:18.282089  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:18.292620  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:18.292687  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:18.320483  527777 cri.go:89] found id: ""
	I1201 21:16:18.320497  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.320504  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:18.320510  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:18.320569  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:18.346376  527777 cri.go:89] found id: ""
	I1201 21:16:18.346389  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.346397  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:18.346402  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:18.346459  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:18.377534  527777 cri.go:89] found id: ""
	I1201 21:16:18.377549  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.377557  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:18.377562  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:18.377619  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:18.402867  527777 cri.go:89] found id: ""
	I1201 21:16:18.402882  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.402892  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:18.402897  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:18.402952  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:18.429104  527777 cri.go:89] found id: ""
	I1201 21:16:18.429119  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.429126  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:18.429132  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:18.429193  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:18.455237  527777 cri.go:89] found id: ""
	I1201 21:16:18.455251  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.455257  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:18.455263  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:18.455330  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:18.480176  527777 cri.go:89] found id: ""
	I1201 21:16:18.480190  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.480197  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:18.480205  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:18.480215  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:18.554692  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:18.554713  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:18.586044  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:18.586062  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:18.654056  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:18.654076  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:18.670115  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:18.670131  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:18.739729  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:18.731971   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.732738   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.734274   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.734737   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.736253   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:18.731971   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.732738   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.734274   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.734737   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.736253   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:21.240925  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:21.251332  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:21.251400  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:21.277213  527777 cri.go:89] found id: ""
	I1201 21:16:21.277228  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.277266  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:21.277275  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:21.277349  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:21.304294  527777 cri.go:89] found id: ""
	I1201 21:16:21.304308  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.304316  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:21.304321  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:21.304393  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:21.331354  527777 cri.go:89] found id: ""
	I1201 21:16:21.331369  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.331377  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:21.331382  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:21.331455  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:21.358548  527777 cri.go:89] found id: ""
	I1201 21:16:21.358563  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.358571  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:21.358577  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:21.358637  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:21.384228  527777 cri.go:89] found id: ""
	I1201 21:16:21.384242  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.384250  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:21.384255  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:21.384321  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:21.413560  527777 cri.go:89] found id: ""
	I1201 21:16:21.413574  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.413581  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:21.413587  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:21.413647  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:21.439790  527777 cri.go:89] found id: ""
	I1201 21:16:21.439805  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.439813  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:21.439821  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:21.439839  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:21.505587  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:21.505607  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:21.522038  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:21.522064  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:21.590692  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:21.582084   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.583389   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.584091   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.585517   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.585879   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:21.582084   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.583389   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.584091   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.585517   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.585879   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:21.590718  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:21.590730  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:21.667703  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:21.667727  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:24.203209  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:24.214159  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:24.214230  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:24.242378  527777 cri.go:89] found id: ""
	I1201 21:16:24.242392  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.242399  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:24.242405  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:24.242486  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:24.269017  527777 cri.go:89] found id: ""
	I1201 21:16:24.269032  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.269039  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:24.269045  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:24.269103  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:24.295927  527777 cri.go:89] found id: ""
	I1201 21:16:24.295942  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.295949  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:24.295955  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:24.296019  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:24.321917  527777 cri.go:89] found id: ""
	I1201 21:16:24.321932  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.321939  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:24.321944  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:24.322012  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:24.350147  527777 cri.go:89] found id: ""
	I1201 21:16:24.350163  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.350171  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:24.350177  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:24.350250  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:24.376131  527777 cri.go:89] found id: ""
	I1201 21:16:24.376145  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.376153  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:24.376160  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:24.376220  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:24.403024  527777 cri.go:89] found id: ""
	I1201 21:16:24.403039  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.403046  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:24.403055  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:24.403068  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:24.418212  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:24.418230  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:24.486448  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:24.478347   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.478999   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.480897   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.481565   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.482855   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:24.478347   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.478999   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.480897   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.481565   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.482855   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:24.486460  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:24.486472  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:24.563285  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:24.563307  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:24.597003  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:24.597023  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:27.167466  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:27.179061  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:27.179139  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:27.210380  527777 cri.go:89] found id: ""
	I1201 21:16:27.210394  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.210402  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:27.210409  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:27.210474  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:27.238732  527777 cri.go:89] found id: ""
	I1201 21:16:27.238747  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.238754  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:27.238760  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:27.238827  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:27.265636  527777 cri.go:89] found id: ""
	I1201 21:16:27.265652  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.265661  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:27.265667  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:27.265736  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:27.292213  527777 cri.go:89] found id: ""
	I1201 21:16:27.292228  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.292235  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:27.292241  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:27.292300  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:27.324732  527777 cri.go:89] found id: ""
	I1201 21:16:27.324747  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.324755  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:27.324762  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:27.324827  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:27.352484  527777 cri.go:89] found id: ""
	I1201 21:16:27.352499  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.352507  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:27.352513  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:27.352590  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:27.384113  527777 cri.go:89] found id: ""
	I1201 21:16:27.384128  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.384136  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:27.384144  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:27.384155  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:27.415615  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:27.415634  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:27.482296  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:27.482319  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:27.498829  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:27.498846  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:27.569732  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:27.560441   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.561149   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.563083   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.564057   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.565939   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:27.560441   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.561149   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.563083   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.564057   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.565939   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:27.569744  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:27.569757  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:30.145371  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:30.156840  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:30.156922  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:30.184704  527777 cri.go:89] found id: ""
	I1201 21:16:30.184719  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.184727  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:30.184733  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:30.184795  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:30.213086  527777 cri.go:89] found id: ""
	I1201 21:16:30.213110  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.213120  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:30.213125  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:30.213192  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:30.245472  527777 cri.go:89] found id: ""
	I1201 21:16:30.245486  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.245494  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:30.245499  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:30.245565  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:30.273463  527777 cri.go:89] found id: ""
	I1201 21:16:30.273477  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.273485  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:30.273491  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:30.273557  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:30.302141  527777 cri.go:89] found id: ""
	I1201 21:16:30.302156  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.302164  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:30.302170  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:30.302232  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:30.329744  527777 cri.go:89] found id: ""
	I1201 21:16:30.329758  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.329765  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:30.329771  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:30.329833  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:30.356049  527777 cri.go:89] found id: ""
	I1201 21:16:30.356063  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.356071  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:30.356079  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:30.356110  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:30.424124  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:30.415484   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.416264   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.417932   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.418545   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.420321   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:30.415484   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.416264   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.417932   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.418545   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.420321   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:30.424134  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:30.424145  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:30.498989  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:30.499009  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:30.536189  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:30.536208  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:30.601111  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:30.601130  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:33.116248  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:33.129790  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:33.129876  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:33.162072  527777 cri.go:89] found id: ""
	I1201 21:16:33.162085  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.162093  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:33.162098  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:33.162168  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:33.188853  527777 cri.go:89] found id: ""
	I1201 21:16:33.188868  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.188875  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:33.188881  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:33.188944  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:33.215527  527777 cri.go:89] found id: ""
	I1201 21:16:33.215541  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.215548  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:33.215554  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:33.215613  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:33.241336  527777 cri.go:89] found id: ""
	I1201 21:16:33.241350  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.241357  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:33.241363  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:33.241422  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:33.267551  527777 cri.go:89] found id: ""
	I1201 21:16:33.267564  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.267571  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:33.267576  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:33.267639  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:33.293257  527777 cri.go:89] found id: ""
	I1201 21:16:33.293273  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.293280  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:33.293286  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:33.293346  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:33.324702  527777 cri.go:89] found id: ""
	I1201 21:16:33.324717  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.324725  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:33.324733  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:33.324745  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:33.393448  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:33.393473  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:33.409048  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:33.409075  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:33.473709  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:33.465395   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.465779   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.467541   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.468183   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.469632   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:33.465395   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.465779   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.467541   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.468183   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.469632   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:33.473720  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:33.473731  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:33.549174  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:33.549194  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:36.083124  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:36.093860  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:36.093919  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:36.122911  527777 cri.go:89] found id: ""
	I1201 21:16:36.122925  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.122932  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:36.122938  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:36.123000  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:36.148002  527777 cri.go:89] found id: ""
	I1201 21:16:36.148016  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.148023  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:36.148028  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:36.148088  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:36.173008  527777 cri.go:89] found id: ""
	I1201 21:16:36.173022  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.173029  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:36.173034  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:36.173092  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:36.198828  527777 cri.go:89] found id: ""
	I1201 21:16:36.198841  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.198848  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:36.198854  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:36.198909  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:36.224001  527777 cri.go:89] found id: ""
	I1201 21:16:36.224015  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.224022  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:36.224027  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:36.224085  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:36.249054  527777 cri.go:89] found id: ""
	I1201 21:16:36.249068  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.249075  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:36.249080  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:36.249140  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:36.273000  527777 cri.go:89] found id: ""
	I1201 21:16:36.273014  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.273021  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:36.273029  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:36.273039  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:36.337502  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:36.337521  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:36.353315  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:36.353331  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:36.424612  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:36.416389   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.416852   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.418267   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.419034   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.420807   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:36.416389   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.416852   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.418267   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.419034   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.420807   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:36.424623  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:36.424633  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:36.503070  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:36.503100  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:39.034568  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:39.045696  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:39.045760  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:39.071542  527777 cri.go:89] found id: ""
	I1201 21:16:39.071555  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.071563  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:39.071569  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:39.071630  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:39.102301  527777 cri.go:89] found id: ""
	I1201 21:16:39.102315  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.102322  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:39.102328  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:39.102384  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:39.129808  527777 cri.go:89] found id: ""
	I1201 21:16:39.129823  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.129830  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:39.129836  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:39.129895  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:39.155555  527777 cri.go:89] found id: ""
	I1201 21:16:39.155569  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.155576  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:39.155582  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:39.155650  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:39.186394  527777 cri.go:89] found id: ""
	I1201 21:16:39.186408  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.186415  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:39.186420  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:39.186485  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:39.213875  527777 cri.go:89] found id: ""
	I1201 21:16:39.213889  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.213896  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:39.213901  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:39.213957  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:39.243609  527777 cri.go:89] found id: ""
	I1201 21:16:39.243623  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.243631  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:39.243640  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:39.243652  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:39.307878  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:39.307897  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:39.322972  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:39.322989  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:39.391843  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:39.383574   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.384012   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.385493   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.385831   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.387179   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:39.383574   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.384012   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.385493   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.385831   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.387179   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:39.391853  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:39.391869  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:39.471894  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:39.471915  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:42.007008  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:42.029520  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:42.029588  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:42.057505  527777 cri.go:89] found id: ""
	I1201 21:16:42.057520  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.057528  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:42.057534  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:42.057598  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:42.097060  527777 cri.go:89] found id: ""
	I1201 21:16:42.097086  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.097094  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:42.097100  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:42.097191  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:42.136029  527777 cri.go:89] found id: ""
	I1201 21:16:42.136048  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.136058  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:42.136064  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:42.136155  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:42.183711  527777 cri.go:89] found id: ""
	I1201 21:16:42.183733  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.183743  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:42.183750  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:42.183825  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:42.219282  527777 cri.go:89] found id: ""
	I1201 21:16:42.219298  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.219320  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:42.219326  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:42.219393  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:42.248969  527777 cri.go:89] found id: ""
	I1201 21:16:42.248986  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.248994  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:42.249005  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:42.249079  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:42.283438  527777 cri.go:89] found id: ""
	I1201 21:16:42.283452  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.283459  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:42.283467  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:42.283479  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:42.355657  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:42.347226   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.347801   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.349475   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.349945   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.351044   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:42.347226   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.347801   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.349475   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.349945   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.351044   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:42.355675  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:42.355686  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:42.432138  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:42.432158  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:42.466460  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:42.466475  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:42.532633  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:42.532653  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:45.050487  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:45.077310  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:45.077404  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:45.125431  527777 cri.go:89] found id: ""
	I1201 21:16:45.125455  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.125463  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:45.125469  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:45.125541  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:45.159113  527777 cri.go:89] found id: ""
	I1201 21:16:45.159151  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.159161  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:45.159167  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:45.159238  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:45.205059  527777 cri.go:89] found id: ""
	I1201 21:16:45.205075  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.205084  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:45.205092  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:45.205213  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:45.256952  527777 cri.go:89] found id: ""
	I1201 21:16:45.257035  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.257044  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:45.257051  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:45.257244  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:45.299953  527777 cri.go:89] found id: ""
	I1201 21:16:45.299967  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.299975  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:45.299981  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:45.300047  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:45.334546  527777 cri.go:89] found id: ""
	I1201 21:16:45.334562  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.334570  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:45.334576  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:45.334641  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:45.366922  527777 cri.go:89] found id: ""
	I1201 21:16:45.366936  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.366944  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:45.366952  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:45.366973  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:45.384985  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:45.385003  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:45.455424  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:45.445999   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.446779   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.448616   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.449343   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.450996   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:45.445999   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.446779   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.448616   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.449343   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.450996   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:45.455434  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:45.455446  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:45.532668  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:45.532689  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:45.572075  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:45.572092  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:48.147493  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:48.158252  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:48.158331  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:48.185671  527777 cri.go:89] found id: ""
	I1201 21:16:48.185685  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.185692  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:48.185697  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:48.185766  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:48.211977  527777 cri.go:89] found id: ""
	I1201 21:16:48.211991  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.211998  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:48.212003  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:48.212059  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:48.238605  527777 cri.go:89] found id: ""
	I1201 21:16:48.238620  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.238627  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:48.238632  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:48.238691  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:48.272407  527777 cri.go:89] found id: ""
	I1201 21:16:48.272421  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.272428  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:48.272433  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:48.272491  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:48.300451  527777 cri.go:89] found id: ""
	I1201 21:16:48.300465  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.300472  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:48.300478  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:48.300543  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:48.326518  527777 cri.go:89] found id: ""
	I1201 21:16:48.326542  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.326550  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:48.326555  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:48.326629  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:48.353027  527777 cri.go:89] found id: ""
	I1201 21:16:48.353043  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.353050  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:48.353059  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:48.353070  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:48.418908  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:48.418928  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:48.435338  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:48.435358  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:48.502670  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:48.494115   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.494749   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.496453   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.497013   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.498610   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:48.494115   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.494749   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.496453   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.497013   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.498610   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:48.502708  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:48.502718  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:48.579198  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:48.579219  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:51.111632  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:51.122895  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:51.122970  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:51.149845  527777 cri.go:89] found id: ""
	I1201 21:16:51.149859  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.149867  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:51.149872  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:51.149937  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:51.182385  527777 cri.go:89] found id: ""
	I1201 21:16:51.182399  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.182406  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:51.182411  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:51.182473  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:51.207954  527777 cri.go:89] found id: ""
	I1201 21:16:51.207967  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.208015  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:51.208024  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:51.208080  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:51.233058  527777 cri.go:89] found id: ""
	I1201 21:16:51.233071  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.233077  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:51.233083  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:51.233146  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:51.259105  527777 cri.go:89] found id: ""
	I1201 21:16:51.259119  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.259127  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:51.259147  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:51.259205  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:51.284870  527777 cri.go:89] found id: ""
	I1201 21:16:51.284884  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.284891  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:51.284896  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:51.284953  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:51.312084  527777 cri.go:89] found id: ""
	I1201 21:16:51.312099  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.312106  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:51.312115  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:51.312126  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:51.342115  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:51.342134  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:51.408816  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:51.408836  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:51.425032  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:51.425054  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:51.494088  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:51.485702   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.486261   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.487911   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.488439   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.489973   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:51.485702   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.486261   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.487911   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.488439   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.489973   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:51.494097  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:51.494107  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:54.070393  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:54.082393  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:54.082464  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:54.112007  527777 cri.go:89] found id: ""
	I1201 21:16:54.112033  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.112041  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:54.112048  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:54.112120  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:54.142629  527777 cri.go:89] found id: ""
	I1201 21:16:54.142643  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.142650  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:54.142656  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:54.142715  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:54.170596  527777 cri.go:89] found id: ""
	I1201 21:16:54.170611  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.170618  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:54.170623  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:54.170685  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:54.199276  527777 cri.go:89] found id: ""
	I1201 21:16:54.199301  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.199309  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:54.199314  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:54.199385  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:54.229268  527777 cri.go:89] found id: ""
	I1201 21:16:54.229285  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.229294  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:54.229300  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:54.229378  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:54.261273  527777 cri.go:89] found id: ""
	I1201 21:16:54.261289  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.261298  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:54.261306  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:54.261409  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:54.289154  527777 cri.go:89] found id: ""
	I1201 21:16:54.289169  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.289189  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:54.289199  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:54.289216  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:54.363048  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:54.355149   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.356097   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.357711   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.358323   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.359471   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:54.355149   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.356097   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.357711   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.358323   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.359471   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:54.363059  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:54.363070  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:54.440875  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:54.440897  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:54.471338  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:54.471355  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:54.543810  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:54.543830  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:57.061388  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:57.071929  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:57.071998  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:57.102516  527777 cri.go:89] found id: ""
	I1201 21:16:57.102531  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.102540  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:57.102546  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:57.102614  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:57.129734  527777 cri.go:89] found id: ""
	I1201 21:16:57.129749  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.129756  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:57.129761  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:57.129825  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:57.160948  527777 cri.go:89] found id: ""
	I1201 21:16:57.160962  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.160971  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:57.160977  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:57.161049  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:57.192059  527777 cri.go:89] found id: ""
	I1201 21:16:57.192075  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.192082  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:57.192088  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:57.192155  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:57.217906  527777 cri.go:89] found id: ""
	I1201 21:16:57.217920  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.217927  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:57.217932  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:57.217992  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:57.246391  527777 cri.go:89] found id: ""
	I1201 21:16:57.246406  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.246414  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:57.246420  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:57.246480  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:57.273534  527777 cri.go:89] found id: ""
	I1201 21:16:57.273558  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.273565  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:57.273573  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:57.273585  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:57.338589  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:57.338609  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:57.354225  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:57.354241  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:57.425192  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:57.416917   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.417985   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419291   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419806   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.421427   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:57.416917   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.417985   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419291   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419806   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.421427   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:57.425202  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:57.425213  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:57.501690  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:57.501713  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:00.031846  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:00.071974  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:00.072071  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:00.158888  527777 cri.go:89] found id: ""
	I1201 21:17:00.158904  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.158912  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:00.158918  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:00.158994  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:00.267283  527777 cri.go:89] found id: ""
	I1201 21:17:00.267299  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.267306  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:00.267312  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:00.267395  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:00.331710  527777 cri.go:89] found id: ""
	I1201 21:17:00.331725  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.331733  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:00.331740  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:00.331821  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:00.416435  527777 cri.go:89] found id: ""
	I1201 21:17:00.416468  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.416476  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:00.416482  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:00.416566  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:00.456878  527777 cri.go:89] found id: ""
	I1201 21:17:00.456894  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.456904  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:00.456909  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:00.456979  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:00.511096  527777 cri.go:89] found id: ""
	I1201 21:17:00.511113  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.511122  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:00.511166  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:00.511245  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:00.565444  527777 cri.go:89] found id: ""
	I1201 21:17:00.565463  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.565471  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:00.565480  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:00.565498  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:00.641086  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:00.641121  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:00.662045  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:00.662064  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:00.750234  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:00.740709   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.741500   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.743472   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.744204   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.745911   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:00.740709   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.741500   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.743472   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.744204   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.745911   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:00.750246  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:00.750258  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:00.828511  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:00.828539  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:03.366405  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:03.379053  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:03.379127  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:03.412977  527777 cri.go:89] found id: ""
	I1201 21:17:03.412991  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.412999  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:03.413005  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:03.413074  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:03.442789  527777 cri.go:89] found id: ""
	I1201 21:17:03.442817  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.442827  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:03.442834  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:03.442956  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:03.472731  527777 cri.go:89] found id: ""
	I1201 21:17:03.472758  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.472767  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:03.472772  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:03.472843  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:03.503719  527777 cri.go:89] found id: ""
	I1201 21:17:03.503735  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.503744  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:03.503751  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:03.503823  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:03.533642  527777 cri.go:89] found id: ""
	I1201 21:17:03.533658  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.533665  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:03.533671  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:03.533749  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:03.562889  527777 cri.go:89] found id: ""
	I1201 21:17:03.562908  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.562916  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:03.562922  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:03.563006  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:03.592257  527777 cri.go:89] found id: ""
	I1201 21:17:03.592275  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.592283  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:03.592291  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:03.592303  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:03.660263  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:03.660282  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:03.683357  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:03.683375  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:03.765695  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:03.755989   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.757040   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758018   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758825   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.760781   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:03.755989   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.757040   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758018   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758825   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.760781   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:03.765707  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:03.765719  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:03.842543  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:03.842567  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:06.376185  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:06.387932  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:06.388000  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:06.417036  527777 cri.go:89] found id: ""
	I1201 21:17:06.417050  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.417058  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:06.417064  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:06.417125  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:06.447064  527777 cri.go:89] found id: ""
	I1201 21:17:06.447090  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.447098  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:06.447104  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:06.447207  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:06.476879  527777 cri.go:89] found id: ""
	I1201 21:17:06.476893  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.476900  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:06.476905  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:06.476968  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:06.506320  527777 cri.go:89] found id: ""
	I1201 21:17:06.506338  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.506346  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:06.506352  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:06.506419  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:06.535420  527777 cri.go:89] found id: ""
	I1201 21:17:06.535443  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.535451  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:06.535458  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:06.535525  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:06.563751  527777 cri.go:89] found id: ""
	I1201 21:17:06.563784  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.563792  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:06.563798  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:06.563865  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:06.597779  527777 cri.go:89] found id: ""
	I1201 21:17:06.597795  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.597803  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:06.597811  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:06.597823  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:06.681458  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:06.672535   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.673200   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.674869   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.675413   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.677204   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:06.672535   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.673200   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.674869   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.675413   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.677204   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:06.681470  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:06.681482  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:06.778343  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:06.778369  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:06.812835  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:06.812854  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:06.886097  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:06.886123  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:09.404611  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:09.415307  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:09.415386  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:09.454145  527777 cri.go:89] found id: ""
	I1201 21:17:09.454159  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.454168  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:09.454174  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:09.454240  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:09.483869  527777 cri.go:89] found id: ""
	I1201 21:17:09.483885  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.483893  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:09.483899  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:09.483961  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:09.510637  527777 cri.go:89] found id: ""
	I1201 21:17:09.510650  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.510657  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:09.510662  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:09.510719  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:09.542823  527777 cri.go:89] found id: ""
	I1201 21:17:09.542837  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.542844  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:09.542849  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:09.542911  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:09.570165  527777 cri.go:89] found id: ""
	I1201 21:17:09.570184  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.570191  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:09.570196  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:09.570254  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:09.595630  527777 cri.go:89] found id: ""
	I1201 21:17:09.595645  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.595652  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:09.595658  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:09.595722  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:09.621205  527777 cri.go:89] found id: ""
	I1201 21:17:09.621219  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.621226  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:09.621234  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:09.621244  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:09.700160  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:09.700182  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:09.739401  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:09.739425  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:09.809572  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:09.809594  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:09.828869  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:09.828886  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:09.920701  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:09.910986   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.911656   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.913525   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.914123   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.915691   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:09.910986   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.911656   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.913525   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.914123   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.915691   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:12.421012  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:12.432213  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:12.432287  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:12.459734  527777 cri.go:89] found id: ""
	I1201 21:17:12.459757  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.459765  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:12.459771  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:12.459840  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:12.485671  527777 cri.go:89] found id: ""
	I1201 21:17:12.485685  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.485692  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:12.485698  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:12.485757  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:12.511548  527777 cri.go:89] found id: ""
	I1201 21:17:12.511564  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.511572  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:12.511577  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:12.511637  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:12.542030  527777 cri.go:89] found id: ""
	I1201 21:17:12.542046  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.542053  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:12.542060  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:12.542120  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:12.567661  527777 cri.go:89] found id: ""
	I1201 21:17:12.567675  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.567691  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:12.567696  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:12.567766  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:12.597625  527777 cri.go:89] found id: ""
	I1201 21:17:12.597640  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.597647  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:12.597653  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:12.597718  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:12.623694  527777 cri.go:89] found id: ""
	I1201 21:17:12.623708  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.623715  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:12.623722  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:12.623733  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:12.638757  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:12.638772  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:12.731591  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:12.722231   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.723090   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.724750   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.725287   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.726853   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:12.722231   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.723090   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.724750   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.725287   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.726853   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:12.731601  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:12.731612  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:12.808720  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:12.808739  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:12.838448  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:12.838465  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:15.411670  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:15.422227  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:15.422288  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:15.449244  527777 cri.go:89] found id: ""
	I1201 21:17:15.449267  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.449275  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:15.449281  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:15.449351  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:15.475790  527777 cri.go:89] found id: ""
	I1201 21:17:15.475804  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.475812  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:15.475817  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:15.475883  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:15.505030  527777 cri.go:89] found id: ""
	I1201 21:17:15.505044  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.505052  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:15.505057  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:15.505121  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:15.535702  527777 cri.go:89] found id: ""
	I1201 21:17:15.535717  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.535726  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:15.535732  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:15.535802  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:15.561881  527777 cri.go:89] found id: ""
	I1201 21:17:15.561895  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.561903  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:15.561909  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:15.561968  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:15.589608  527777 cri.go:89] found id: ""
	I1201 21:17:15.589623  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.589631  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:15.589637  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:15.589704  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:15.617545  527777 cri.go:89] found id: ""
	I1201 21:17:15.617559  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.617565  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:15.617573  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:15.617584  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:15.633049  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:15.633067  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:15.719603  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:15.707520   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.708421   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710252   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710836   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.715756   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:15.707520   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.708421   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710252   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710836   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.715756   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:15.719617  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:15.719628  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:15.795783  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:15.795806  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:15.829611  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:15.829629  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:18.397343  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:18.407645  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:18.407707  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:18.431992  527777 cri.go:89] found id: ""
	I1201 21:17:18.432013  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.432020  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:18.432025  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:18.432082  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:18.456900  527777 cri.go:89] found id: ""
	I1201 21:17:18.456914  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.456921  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:18.456927  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:18.456985  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:18.482130  527777 cri.go:89] found id: ""
	I1201 21:17:18.482144  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.482151  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:18.482156  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:18.482216  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:18.506788  527777 cri.go:89] found id: ""
	I1201 21:17:18.506802  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.506809  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:18.506814  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:18.506880  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:18.535015  527777 cri.go:89] found id: ""
	I1201 21:17:18.535029  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.535036  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:18.535041  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:18.535102  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:18.561266  527777 cri.go:89] found id: ""
	I1201 21:17:18.561281  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.561288  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:18.561294  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:18.561350  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:18.590006  527777 cri.go:89] found id: ""
	I1201 21:17:18.590020  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.590027  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:18.590034  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:18.590044  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:18.655626  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:18.655644  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:18.673142  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:18.673158  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:18.755072  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:18.747127   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.747701   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749289   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749738   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.751418   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:18.747127   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.747701   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749289   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749738   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.751418   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:18.755084  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:18.755097  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:18.830997  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:18.831019  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:21.361828  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:21.372633  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:21.372693  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:21.397967  527777 cri.go:89] found id: ""
	I1201 21:17:21.397981  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.398009  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:21.398014  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:21.398083  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:21.424540  527777 cri.go:89] found id: ""
	I1201 21:17:21.424554  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.424570  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:21.424575  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:21.424644  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:21.450905  527777 cri.go:89] found id: ""
	I1201 21:17:21.450920  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.450948  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:21.450954  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:21.451029  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:21.483885  527777 cri.go:89] found id: ""
	I1201 21:17:21.483899  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.483906  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:21.483911  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:21.483966  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:21.514135  527777 cri.go:89] found id: ""
	I1201 21:17:21.514149  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.514156  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:21.514162  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:21.514221  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:21.540203  527777 cri.go:89] found id: ""
	I1201 21:17:21.540217  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.540224  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:21.540229  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:21.540285  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:21.570752  527777 cri.go:89] found id: ""
	I1201 21:17:21.570765  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.570772  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:21.570780  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:21.570794  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:21.636631  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:21.636651  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:21.652498  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:21.652516  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:21.739586  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:21.730607   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.731381   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733218   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733844   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.735529   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:21.730607   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.731381   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733218   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733844   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.735529   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:21.739597  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:21.739609  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:21.815773  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:21.815793  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:24.351500  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:24.361669  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:24.361728  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:24.390941  527777 cri.go:89] found id: ""
	I1201 21:17:24.390955  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.390962  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:24.390968  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:24.391024  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:24.416426  527777 cri.go:89] found id: ""
	I1201 21:17:24.416440  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.416448  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:24.416453  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:24.416510  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:24.443044  527777 cri.go:89] found id: ""
	I1201 21:17:24.443058  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.443065  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:24.443070  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:24.443182  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:24.468754  527777 cri.go:89] found id: ""
	I1201 21:17:24.468769  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.468776  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:24.468781  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:24.468840  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:24.494385  527777 cri.go:89] found id: ""
	I1201 21:17:24.494399  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.494406  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:24.494416  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:24.494477  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:24.519676  527777 cri.go:89] found id: ""
	I1201 21:17:24.519689  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.519696  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:24.519702  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:24.519761  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:24.546000  527777 cri.go:89] found id: ""
	I1201 21:17:24.546014  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.546021  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:24.546028  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:24.546041  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:24.611509  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:24.611529  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:24.626295  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:24.626324  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:24.702708  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:24.694946   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.695784   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697344   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697621   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.699100   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:24.694946   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.695784   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697344   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697621   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.699100   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:24.702719  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:24.702731  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:24.784492  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:24.784514  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:27.320817  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:27.331542  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:27.331602  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:27.357014  527777 cri.go:89] found id: ""
	I1201 21:17:27.357028  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.357035  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:27.357040  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:27.357098  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:27.381792  527777 cri.go:89] found id: ""
	I1201 21:17:27.381806  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.381813  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:27.381818  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:27.381880  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:27.407905  527777 cri.go:89] found id: ""
	I1201 21:17:27.407919  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.407927  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:27.407933  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:27.407994  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:27.433511  527777 cri.go:89] found id: ""
	I1201 21:17:27.433526  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.433533  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:27.433539  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:27.433596  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:27.459609  527777 cri.go:89] found id: ""
	I1201 21:17:27.459622  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.459629  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:27.459635  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:27.459700  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:27.487173  527777 cri.go:89] found id: ""
	I1201 21:17:27.487186  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.487193  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:27.487199  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:27.487257  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:27.512860  527777 cri.go:89] found id: ""
	I1201 21:17:27.512874  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.512881  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:27.512889  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:27.512901  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:27.541723  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:27.541739  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:27.606990  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:27.607009  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:27.622689  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:27.622705  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:27.700563  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:27.692859   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.693627   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695255   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695560   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.697023   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:27.692859   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.693627   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695255   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695560   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.697023   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:27.700573  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:27.700586  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:30.289250  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:30.300157  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:30.300217  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:30.327373  527777 cri.go:89] found id: ""
	I1201 21:17:30.327394  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.327405  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:30.327420  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:30.327492  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:30.353615  527777 cri.go:89] found id: ""
	I1201 21:17:30.353629  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.353636  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:30.353642  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:30.353702  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:30.385214  527777 cri.go:89] found id: ""
	I1201 21:17:30.385228  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.385235  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:30.385240  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:30.385300  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:30.415674  527777 cri.go:89] found id: ""
	I1201 21:17:30.415688  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.415695  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:30.415701  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:30.415767  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:30.442641  527777 cri.go:89] found id: ""
	I1201 21:17:30.442656  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.442663  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:30.442668  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:30.442726  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:30.469997  527777 cri.go:89] found id: ""
	I1201 21:17:30.470010  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.470017  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:30.470023  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:30.470081  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:30.495554  527777 cri.go:89] found id: ""
	I1201 21:17:30.495570  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.495579  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:30.495587  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:30.495599  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:30.559878  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:30.552159   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.552978   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554577   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554896   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.556427   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:30.552159   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.552978   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554577   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554896   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.556427   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:30.559888  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:30.559899  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:30.635560  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:30.635581  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:30.673666  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:30.673682  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:30.747787  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:30.747808  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:33.264623  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:33.276366  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:33.276427  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:33.306447  527777 cri.go:89] found id: ""
	I1201 21:17:33.306461  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.306473  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:33.306478  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:33.306538  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:33.334715  527777 cri.go:89] found id: ""
	I1201 21:17:33.334730  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.334738  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:33.334744  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:33.334814  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:33.365674  527777 cri.go:89] found id: ""
	I1201 21:17:33.365690  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.365698  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:33.365705  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:33.365774  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:33.396072  527777 cri.go:89] found id: ""
	I1201 21:17:33.396089  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.396096  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:33.396103  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:33.396175  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:33.429356  527777 cri.go:89] found id: ""
	I1201 21:17:33.429372  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.429381  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:33.429387  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:33.429461  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:33.457917  527777 cri.go:89] found id: ""
	I1201 21:17:33.457932  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.457941  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:33.457948  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:33.458022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:33.490167  527777 cri.go:89] found id: ""
	I1201 21:17:33.490182  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.490190  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:33.490199  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:33.490212  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:33.558131  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:33.558155  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:33.575080  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:33.575101  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:33.657808  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:33.644900   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.645597   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647206   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647744   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.649342   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:33.644900   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.645597   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647206   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647744   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.649342   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:33.657834  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:33.657848  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:33.754296  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:33.754323  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:36.289647  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:36.300774  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:36.300833  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:36.327492  527777 cri.go:89] found id: ""
	I1201 21:17:36.327507  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.327514  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:36.327520  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:36.327583  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:36.359515  527777 cri.go:89] found id: ""
	I1201 21:17:36.359529  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.359537  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:36.359542  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:36.359606  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:36.387977  527777 cri.go:89] found id: ""
	I1201 21:17:36.387990  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.387997  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:36.388002  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:36.388058  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:36.413410  527777 cri.go:89] found id: ""
	I1201 21:17:36.413429  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.413436  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:36.413442  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:36.413499  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:36.440588  527777 cri.go:89] found id: ""
	I1201 21:17:36.440614  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.440622  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:36.440627  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:36.440698  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:36.471404  527777 cri.go:89] found id: ""
	I1201 21:17:36.471419  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.471427  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:36.471433  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:36.471500  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:36.499502  527777 cri.go:89] found id: ""
	I1201 21:17:36.499518  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.499528  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:36.499536  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:36.499546  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:36.568027  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:36.568052  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:36.584561  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:36.584580  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:36.665718  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:36.648985   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.649527   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651261   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651644   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.653266   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:36.648985   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.649527   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651261   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651644   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.653266   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:36.665728  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:36.665740  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:36.748791  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:36.748812  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:39.285189  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:39.296369  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:39.296438  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:39.323280  527777 cri.go:89] found id: ""
	I1201 21:17:39.323294  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.323306  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:39.323312  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:39.323379  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:39.352092  527777 cri.go:89] found id: ""
	I1201 21:17:39.352107  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.352115  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:39.352120  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:39.352187  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:39.379352  527777 cri.go:89] found id: ""
	I1201 21:17:39.379367  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.379375  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:39.379382  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:39.379446  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:39.406925  527777 cri.go:89] found id: ""
	I1201 21:17:39.406940  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.406947  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:39.406954  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:39.407022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:39.434427  527777 cri.go:89] found id: ""
	I1201 21:17:39.434442  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.434450  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:39.434455  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:39.434521  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:39.466725  527777 cri.go:89] found id: ""
	I1201 21:17:39.466741  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.466748  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:39.466755  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:39.466821  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:39.494952  527777 cri.go:89] found id: ""
	I1201 21:17:39.494968  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.494976  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:39.494985  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:39.494998  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:39.510984  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:39.511002  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:39.585968  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:39.576561   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.577151   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.578340   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.579982   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.580410   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:39.576561   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.577151   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.578340   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.579982   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.580410   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:39.585981  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:39.585993  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:39.669009  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:39.669033  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:39.705170  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:39.705189  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:42.275450  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:42.287572  527777 kubeadm.go:602] duration metric: took 4m1.888207918s to restartPrimaryControlPlane
	W1201 21:17:42.287658  527777 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1201 21:17:42.287747  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1201 21:17:42.711674  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 21:17:42.725511  527777 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 21:17:42.734239  527777 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 21:17:42.734308  527777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 21:17:42.743050  527777 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 21:17:42.743060  527777 kubeadm.go:158] found existing configuration files:
	
	I1201 21:17:42.743120  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1201 21:17:42.751678  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 21:17:42.751731  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 21:17:42.759481  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1201 21:17:42.767903  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 21:17:42.767964  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 21:17:42.776067  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1201 21:17:42.784283  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 21:17:42.784355  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 21:17:42.792582  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1201 21:17:42.801449  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 21:17:42.801518  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 21:17:42.809783  527777 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 21:17:42.849635  527777 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 21:17:42.849689  527777 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 21:17:42.929073  527777 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 21:17:42.929165  527777 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1201 21:17:42.929199  527777 kubeadm.go:319] OS: Linux
	I1201 21:17:42.929243  527777 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 21:17:42.929296  527777 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1201 21:17:42.929342  527777 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 21:17:42.929388  527777 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 21:17:42.929435  527777 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 21:17:42.929482  527777 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 21:17:42.929526  527777 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 21:17:42.929573  527777 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 21:17:42.929617  527777 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1201 21:17:43.002025  527777 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 21:17:43.002165  527777 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 21:17:43.002258  527777 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 21:17:43.013458  527777 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 21:17:43.017000  527777 out.go:252]   - Generating certificates and keys ...
	I1201 21:17:43.017095  527777 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 21:17:43.017170  527777 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 21:17:43.017252  527777 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1201 21:17:43.017311  527777 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1201 21:17:43.017379  527777 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1201 21:17:43.017434  527777 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1201 21:17:43.017501  527777 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1201 21:17:43.017561  527777 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1201 21:17:43.017634  527777 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1201 21:17:43.017705  527777 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1201 21:17:43.017832  527777 kubeadm.go:319] [certs] Using the existing "sa" key
	I1201 21:17:43.017892  527777 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 21:17:43.133992  527777 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 21:17:43.467350  527777 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 21:17:43.613021  527777 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 21:17:43.910424  527777 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 21:17:44.196121  527777 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 21:17:44.196632  527777 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 21:17:44.199145  527777 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 21:17:44.202480  527777 out.go:252]   - Booting up control plane ...
	I1201 21:17:44.202575  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 21:17:44.202651  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 21:17:44.202718  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 21:17:44.217388  527777 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 21:17:44.217714  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 21:17:44.228031  527777 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 21:17:44.228400  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 21:17:44.228517  527777 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 21:17:44.357408  527777 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 21:17:44.357522  527777 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 21:21:44.357404  527777 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000240491s
	I1201 21:21:44.357429  527777 kubeadm.go:319] 
	I1201 21:21:44.357487  527777 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1201 21:21:44.357523  527777 kubeadm.go:319] 	- The kubelet is not running
	I1201 21:21:44.357633  527777 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1201 21:21:44.357637  527777 kubeadm.go:319] 
	I1201 21:21:44.357830  527777 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1201 21:21:44.357863  527777 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1201 21:21:44.357893  527777 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1201 21:21:44.357896  527777 kubeadm.go:319] 
	I1201 21:21:44.361511  527777 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1201 21:21:44.361943  527777 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1201 21:21:44.362051  527777 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 21:21:44.362287  527777 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1201 21:21:44.362292  527777 kubeadm.go:319] 
	I1201 21:21:44.362361  527777 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1201 21:21:44.362491  527777 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000240491s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1201 21:21:44.362579  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1201 21:21:44.772977  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 21:21:44.786214  527777 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 21:21:44.786270  527777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 21:21:44.794556  527777 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 21:21:44.794568  527777 kubeadm.go:158] found existing configuration files:
	
	I1201 21:21:44.794622  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1201 21:21:44.803048  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 21:21:44.803106  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 21:21:44.810695  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1201 21:21:44.818882  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 21:21:44.818947  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 21:21:44.827077  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1201 21:21:44.834936  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 21:21:44.834995  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 21:21:44.843074  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1201 21:21:44.851084  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 21:21:44.851166  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 21:21:44.858721  527777 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 21:21:44.981319  527777 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1201 21:21:44.981788  527777 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1201 21:21:45.157392  527777 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 21:25:46.243317  527777 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1201 21:25:46.243344  527777 kubeadm.go:319] 
	I1201 21:25:46.243413  527777 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1201 21:25:46.246817  527777 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 21:25:46.246871  527777 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 21:25:46.246962  527777 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 21:25:46.247022  527777 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1201 21:25:46.247057  527777 kubeadm.go:319] OS: Linux
	I1201 21:25:46.247100  527777 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 21:25:46.247175  527777 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1201 21:25:46.247246  527777 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 21:25:46.247312  527777 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 21:25:46.247369  527777 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 21:25:46.247421  527777 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 21:25:46.247464  527777 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 21:25:46.247511  527777 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 21:25:46.247555  527777 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1201 21:25:46.247626  527777 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 21:25:46.247719  527777 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 21:25:46.247811  527777 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 21:25:46.247872  527777 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 21:25:46.250950  527777 out.go:252]   - Generating certificates and keys ...
	I1201 21:25:46.251041  527777 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 21:25:46.251105  527777 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 21:25:46.251224  527777 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1201 21:25:46.251290  527777 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1201 21:25:46.251369  527777 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1201 21:25:46.251431  527777 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1201 21:25:46.251495  527777 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1201 21:25:46.251555  527777 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1201 21:25:46.251629  527777 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1201 21:25:46.251704  527777 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1201 21:25:46.251741  527777 kubeadm.go:319] [certs] Using the existing "sa" key
	I1201 21:25:46.251795  527777 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 21:25:46.251845  527777 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 21:25:46.251899  527777 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 21:25:46.251951  527777 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 21:25:46.252012  527777 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 21:25:46.252065  527777 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 21:25:46.252149  527777 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 21:25:46.252213  527777 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 21:25:46.255065  527777 out.go:252]   - Booting up control plane ...
	I1201 21:25:46.255213  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 21:25:46.255292  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 21:25:46.255359  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 21:25:46.255466  527777 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 21:25:46.255590  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 21:25:46.255713  527777 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 21:25:46.255816  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 21:25:46.255856  527777 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 21:25:46.256011  527777 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 21:25:46.256134  527777 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 21:25:46.256200  527777 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000272278s
	I1201 21:25:46.256203  527777 kubeadm.go:319] 
	I1201 21:25:46.256259  527777 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1201 21:25:46.256290  527777 kubeadm.go:319] 	- The kubelet is not running
	I1201 21:25:46.256400  527777 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1201 21:25:46.256404  527777 kubeadm.go:319] 
	I1201 21:25:46.256508  527777 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1201 21:25:46.256540  527777 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1201 21:25:46.256569  527777 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1201 21:25:46.256592  527777 kubeadm.go:319] 
	I1201 21:25:46.256631  527777 kubeadm.go:403] duration metric: took 12m5.895739008s to StartCluster
	I1201 21:25:46.256661  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:25:46.256721  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:25:46.286008  527777 cri.go:89] found id: ""
	I1201 21:25:46.286022  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.286029  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:25:46.286034  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:25:46.286096  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:25:46.311936  527777 cri.go:89] found id: ""
	I1201 21:25:46.311950  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.311957  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:25:46.311963  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:25:46.312022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:25:46.338008  527777 cri.go:89] found id: ""
	I1201 21:25:46.338022  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.338029  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:25:46.338035  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:25:46.338094  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:25:46.364430  527777 cri.go:89] found id: ""
	I1201 21:25:46.364446  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.364453  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:25:46.364459  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:25:46.364519  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:25:46.390553  527777 cri.go:89] found id: ""
	I1201 21:25:46.390568  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.390574  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:25:46.390580  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:25:46.390638  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:25:46.416135  527777 cri.go:89] found id: ""
	I1201 21:25:46.416149  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.416156  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:25:46.416161  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:25:46.416215  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:25:46.441110  527777 cri.go:89] found id: ""
	I1201 21:25:46.441124  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.441131  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:25:46.441139  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:25:46.441160  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:25:46.456311  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:25:46.456328  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:25:46.535568  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:25:46.527894   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.528437   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.529932   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.530345   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.531878   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:25:46.527894   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.528437   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.529932   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.530345   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.531878   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:25:46.535579  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:25:46.535591  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:25:46.613336  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:25:46.613357  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:25:46.643384  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:25:46.643410  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1201 21:25:46.714793  527777 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000272278s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1201 21:25:46.714844  527777 out.go:285] * 
	W1201 21:25:46.714913  527777 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000272278s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1201 21:25:46.714940  527777 out.go:285] * 
	W1201 21:25:46.717121  527777 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 21:25:46.722121  527777 out.go:203] 
	W1201 21:25:46.725981  527777 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000272278s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1201 21:25:46.726037  527777 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1201 21:25:46.726060  527777 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1201 21:25:46.729457  527777 out.go:203] 
	
	
	==> CRI-O <==
	Dec 01 21:25:55 functional-198694 crio[10476]: time="2025-12-01T21:25:55.958248239Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-198694 found" id=60b0690d-119a-4b74-971b-527f5644551b name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:56 functional-198694 crio[10476]: time="2025-12-01T21:25:56.002277819Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-198694" id=9d0f6f34-dff8-41d4-bb31-fb357f4c68af name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:56 functional-198694 crio[10476]: time="2025-12-01T21:25:56.002458196Z" level=info msg="Image localhost/kicbase/echo-server:functional-198694 not found" id=9d0f6f34-dff8-41d4-bb31-fb357f4c68af name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:56 functional-198694 crio[10476]: time="2025-12-01T21:25:56.002508862Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-198694 found" id=9d0f6f34-dff8-41d4-bb31-fb357f4c68af name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.001788017Z" level=info msg="Checking image status: kicbase/echo-server:functional-198694" id=1eef1f83-f41d-4072-8efe-21875777fc46 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.038212447Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-198694" id=f5d9dd7b-f748-4c02-b084-bf73e2e48bd8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.038444368Z" level=info msg="Image docker.io/kicbase/echo-server:functional-198694 not found" id=f5d9dd7b-f748-4c02-b084-bf73e2e48bd8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.038497101Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-198694 found" id=f5d9dd7b-f748-4c02-b084-bf73e2e48bd8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.071982198Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-198694" id=1a2cb465-f3f7-4483-9653-cf24325561b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.072139634Z" level=info msg="Image localhost/kicbase/echo-server:functional-198694 not found" id=1a2cb465-f3f7-4483-9653-cf24325561b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.072179354Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-198694 found" id=1a2cb465-f3f7-4483-9653-cf24325561b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.449575902Z" level=info msg="Checking image status: kicbase/echo-server:functional-198694" id=9c839e7f-fb5f-4968-a4bc-4d98c332783b name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.503799057Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-198694" id=1fbb90ce-8cb8-4806-9a58-81e0c582e6a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.504091628Z" level=info msg="Image docker.io/kicbase/echo-server:functional-198694 not found" id=1fbb90ce-8cb8-4806-9a58-81e0c582e6a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.504215777Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-198694 found" id=1fbb90ce-8cb8-4806-9a58-81e0c582e6a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.534112776Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-198694" id=7580a9c0-62f6-4f3e-8517-39fe2d123618 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.534244342Z" level=info msg="Image localhost/kicbase/echo-server:functional-198694 not found" id=7580a9c0-62f6-4f3e-8517-39fe2d123618 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.534283193Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-198694 found" id=7580a9c0-62f6-4f3e-8517-39fe2d123618 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.532335101Z" level=info msg="Checking image status: kicbase/echo-server:functional-198694" id=cd62b92b-638f-4a1a-ae2d-ff287a877bee name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.56735868Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-198694" id=c7e91211-bb1c-46f2-bcd8-a92da093ad25 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.56751706Z" level=info msg="Image docker.io/kicbase/echo-server:functional-198694 not found" id=c7e91211-bb1c-46f2-bcd8-a92da093ad25 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.56755705Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-198694 found" id=c7e91211-bb1c-46f2-bcd8-a92da093ad25 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.61237743Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-198694" id=f983679c-8adf-4453-8d30-a89c874062b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.612552228Z" level=info msg="Image localhost/kicbase/echo-server:functional-198694 not found" id=f983679c-8adf-4453-8d30-a89c874062b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.612623471Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-198694 found" id=f983679c-8adf-4453-8d30-a89c874062b7 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:27:46.687618   23730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:27:46.688330   23730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:27:46.690103   23730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:27:46.690669   23730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:27:46.692396   23730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 1 19:31] hrtimer: interrupt took 3224715 ns
	[Dec 1 20:00] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:16] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 1 20:22] systemd-journald[231]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 1 20:37] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:38] overlayfs: idmapped layers are currently not supported
	[  +0.076902] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 1 20:44] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:45] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:58] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:27:46 up  3:10,  0 user,  load average: 0.79, 0.45, 0.47
	Linux functional-198694 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 01 21:27:43 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:27:44 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 798.
	Dec 01 21:27:44 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:27:44 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:27:44 functional-198694 kubelet[23617]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:27:44 functional-198694 kubelet[23617]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:27:44 functional-198694 kubelet[23617]: E1201 21:27:44.704734   23617 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:27:44 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:27:44 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:27:45 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 799.
	Dec 01 21:27:45 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:27:45 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:27:45 functional-198694 kubelet[23623]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:27:45 functional-198694 kubelet[23623]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:27:45 functional-198694 kubelet[23623]: E1201 21:27:45.472300   23623 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:27:45 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:27:45 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:27:46 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 800.
	Dec 01 21:27:46 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:27:46 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:27:46 functional-198694 kubelet[23644]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:27:46 functional-198694 kubelet[23644]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:27:46 functional-198694 kubelet[23644]: E1201 21:27:46.230722   23644 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:27:46 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:27:46 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694: exit status 2 (346.619977ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-198694" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1201 21:26:12.642586  486002 retry.go:31] will retry after 1.856909293s: Temporary Error: Get "http://10.111.187.170": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1201 21:26:24.500664  486002 retry.go:31] will retry after 2.598504304s: Temporary Error: Get "http://10.111.187.170": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1201 21:26:37.100831  486002 retry.go:31] will retry after 7.844098253s: Temporary Error: Get "http://10.111.187.170": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1201 21:26:54.946219  486002 retry.go:31] will retry after 8.416190148s: Temporary Error: Get "http://10.111.187.170": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1201 21:27:13.364012  486002 retry.go:31] will retry after 21.448620218s: Temporary Error: Get "http://10.111.187.170": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the warning above repeated 97 more times while the test polled until its 4m0s deadline]
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694: exit status 2 (336.839406ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-198694" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
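The repeated warnings above come from the test helper polling the apiserver's pod-list endpoint; the `%3D` in the query string is just the URL-encoded `=` of the label selector. A minimal Go sketch of how such a request URL is assembled (host, namespace, and selector are taken from the warning line; the helper's actual client code is not shown here):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildPodListURL reconstructs the pod-list request seen in the warnings.
// url.Values percent-encodes the "=" inside the selector as %3D.
func buildPodListURL(host, namespace, selector string) string {
	q := url.Values{}
	q.Set("labelSelector", selector)
	return fmt.Sprintf("https://%s/api/v1/namespaces/%s/pods?%s", host, namespace, q.Encode())
}

func main() {
	fmt.Println(buildPodListURL("192.168.49.2:8441", "kube-system", "integration-test=storage-provisioner"))
}
```

The "connection refused" then means nothing is listening on 192.168.49.2:8441 at all, which matches the `Stopped` apiserver status reported below.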
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-198694
helpers_test.go:243: (dbg) docker inspect functional-198694:

-- stdout --
	[
	    {
	        "Id": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	        "Created": "2025-12-01T20:58:43.365574809Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515902,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:58:43.423541772Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hostname",
	        "HostsPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hosts",
	        "LogPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8-json.log",
	        "Name": "/functional-198694",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-198694:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-198694",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	                "LowerDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26-init/diff:/var/lib/docker/overlay2/f0ba49b44048d740697b37803f992c2f7a99e21ce77995ff128ceffc01329aa1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-198694",
	                "Source": "/var/lib/docker/volumes/functional-198694/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-198694",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-198694",
	                "name.minikube.sigs.k8s.io": "functional-198694",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8cb3cb57c35171bfce361b9e0de9c9f36ef89baf5e4ad0dd73159d10f1056820",
	            "SandboxKey": "/var/run/docker/netns/8cb3cb57c351",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-198694": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:9a:72:4c:a4:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9750c903db8645b2871ee2eb6fd897b77e607b9a995005513c7bcf81da63c819",
	                    "EndpointID": "884d9ec9fdfc44c10ccd4516f4ea05a765fb3ccb2118db0e8af2392e8613c402",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-198694",
	                        "e545295bd958"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694: exit status 2 (344.635216ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
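`--format={{.Host}}` and `--format={{.APIServer}}` are Go text/template expressions evaluated against minikube's status struct, which is how the two checks above report `Running` and `Stopped` for the same profile: the container is up while the apiserver inside it is not. A sketch under the assumption that the struct exposes fields named like the flags (minikube's real status type has more fields):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// status mirrors only the two fields referenced by the --format flags above.
type status struct {
	Host      string
	APIServer string
}

// render applies a --format style Go template to a status value.
func render(format string, s status) string {
	t := template.Must(template.New("status").Parse(format))
	var buf bytes.Buffer
	if err := t.Execute(&buf, s); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// The split state seen in this post-mortem: container up, apiserver down.
	s := status{Host: "Running", APIServer: "Stopped"}
	fmt.Println(render("{{.Host}}", s))
	fmt.Println(render("{{.APIServer}}", s))
}
```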
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                     ARGS                                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-198694 ssh findmnt -T /mount-9p | grep 9p                                                                                          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ ssh            │ functional-198694 ssh findmnt -T /mount-9p | grep 9p                                                                                          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	│ ssh            │ functional-198694 ssh -- ls -la /mount-9p                                                                                                     │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	│ ssh            │ functional-198694 ssh sudo umount -f /mount-9p                                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ mount          │ -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1388782895/001:/mount1 --alsologtostderr -v=1          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ mount          │ -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1388782895/001:/mount2 --alsologtostderr -v=1          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ ssh            │ functional-198694 ssh findmnt -T /mount1                                                                                                      │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	│ mount          │ -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1388782895/001:/mount3 --alsologtostderr -v=1          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ ssh            │ functional-198694 ssh findmnt -T /mount2                                                                                                      │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	│ ssh            │ functional-198694 ssh findmnt -T /mount3                                                                                                      │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │ 01 Dec 25 21:27 UTC │
	│ mount          │ -p functional-198694 --kill=true                                                                                                              │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ start          │ -p functional-198694 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ start          │ -p functional-198694 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0           │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ start          │ -p functional-198694 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-198694 --alsologtostderr -v=1                                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:27 UTC │                     │
	│ update-context │ functional-198694 update-context --alsologtostderr -v=2                                                                                       │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:28 UTC │ 01 Dec 25 21:28 UTC │
	│ update-context │ functional-198694 update-context --alsologtostderr -v=2                                                                                       │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:28 UTC │ 01 Dec 25 21:28 UTC │
	│ update-context │ functional-198694 update-context --alsologtostderr -v=2                                                                                       │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:28 UTC │ 01 Dec 25 21:28 UTC │
	│ image          │ functional-198694 image ls --format short --alsologtostderr                                                                                   │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:28 UTC │ 01 Dec 25 21:28 UTC │
	│ image          │ functional-198694 image ls --format yaml --alsologtostderr                                                                                    │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:28 UTC │ 01 Dec 25 21:28 UTC │
	│ ssh            │ functional-198694 ssh pgrep buildkitd                                                                                                         │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:28 UTC │                     │
	│ image          │ functional-198694 image build -t localhost/my-image:functional-198694 testdata/build --alsologtostderr                                        │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:28 UTC │ 01 Dec 25 21:28 UTC │
	│ image          │ functional-198694 image ls                                                                                                                    │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:28 UTC │ 01 Dec 25 21:28 UTC │
	│ image          │ functional-198694 image ls --format json --alsologtostderr                                                                                    │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:28 UTC │ 01 Dec 25 21:28 UTC │
	│ image          │ functional-198694 image ls --format table --alsologtostderr                                                                                   │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:28 UTC │ 01 Dec 25 21:28 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 21:27:58
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 21:27:58.101372  546398 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:27:58.101548  546398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:27:58.101555  546398 out.go:374] Setting ErrFile to fd 2...
	I1201 21:27:58.101561  546398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:27:58.101999  546398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:27:58.102413  546398 out.go:368] Setting JSON to false
	I1201 21:27:58.103413  546398 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11428,"bootTime":1764613051,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 21:27:58.103489  546398 start.go:143] virtualization:  
	I1201 21:27:58.106728  546398 out.go:179] * [functional-198694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 21:27:58.110431  546398 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 21:27:58.110606  546398 notify.go:221] Checking for updates...
	I1201 21:27:58.116551  546398 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 21:27:58.119475  546398 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:27:58.122388  546398 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 21:27:58.125369  546398 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 21:27:58.128308  546398 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 21:27:58.131852  546398 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:27:58.132449  546398 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 21:27:58.170277  546398 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 21:27:58.170455  546398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:27:58.267551  546398 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 21:27:58.257057975 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:27:58.267677  546398 docker.go:319] overlay module found
	I1201 21:27:58.271024  546398 out.go:179] * Using the docker driver based on the existing profile
	I1201 21:27:58.274017  546398 start.go:309] selected driver: docker
	I1201 21:27:58.274047  546398 start.go:927] validating driver "docker" against &{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:27:58.274175  546398 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 21:27:58.277846  546398 out.go:203] 
	W1201 21:27:58.280883  546398 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1201 21:27:58.283947  546398 out.go:203] 
	
	
	==> CRI-O <==
	Dec 01 21:25:55 functional-198694 crio[10476]: time="2025-12-01T21:25:55.958248239Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-198694 found" id=60b0690d-119a-4b74-971b-527f5644551b name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:56 functional-198694 crio[10476]: time="2025-12-01T21:25:56.002277819Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-198694" id=9d0f6f34-dff8-41d4-bb31-fb357f4c68af name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:56 functional-198694 crio[10476]: time="2025-12-01T21:25:56.002458196Z" level=info msg="Image localhost/kicbase/echo-server:functional-198694 not found" id=9d0f6f34-dff8-41d4-bb31-fb357f4c68af name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:56 functional-198694 crio[10476]: time="2025-12-01T21:25:56.002508862Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-198694 found" id=9d0f6f34-dff8-41d4-bb31-fb357f4c68af name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.001788017Z" level=info msg="Checking image status: kicbase/echo-server:functional-198694" id=1eef1f83-f41d-4072-8efe-21875777fc46 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.038212447Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-198694" id=f5d9dd7b-f748-4c02-b084-bf73e2e48bd8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.038444368Z" level=info msg="Image docker.io/kicbase/echo-server:functional-198694 not found" id=f5d9dd7b-f748-4c02-b084-bf73e2e48bd8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.038497101Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-198694 found" id=f5d9dd7b-f748-4c02-b084-bf73e2e48bd8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.071982198Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-198694" id=1a2cb465-f3f7-4483-9653-cf24325561b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.072139634Z" level=info msg="Image localhost/kicbase/echo-server:functional-198694 not found" id=1a2cb465-f3f7-4483-9653-cf24325561b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:57 functional-198694 crio[10476]: time="2025-12-01T21:25:57.072179354Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-198694 found" id=1a2cb465-f3f7-4483-9653-cf24325561b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.449575902Z" level=info msg="Checking image status: kicbase/echo-server:functional-198694" id=9c839e7f-fb5f-4968-a4bc-4d98c332783b name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.503799057Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-198694" id=1fbb90ce-8cb8-4806-9a58-81e0c582e6a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.504091628Z" level=info msg="Image docker.io/kicbase/echo-server:functional-198694 not found" id=1fbb90ce-8cb8-4806-9a58-81e0c582e6a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.504215777Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-198694 found" id=1fbb90ce-8cb8-4806-9a58-81e0c582e6a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.534112776Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-198694" id=7580a9c0-62f6-4f3e-8517-39fe2d123618 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.534244342Z" level=info msg="Image localhost/kicbase/echo-server:functional-198694 not found" id=7580a9c0-62f6-4f3e-8517-39fe2d123618 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:58 functional-198694 crio[10476]: time="2025-12-01T21:25:58.534283193Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-198694 found" id=7580a9c0-62f6-4f3e-8517-39fe2d123618 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.532335101Z" level=info msg="Checking image status: kicbase/echo-server:functional-198694" id=cd62b92b-638f-4a1a-ae2d-ff287a877bee name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.56735868Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-198694" id=c7e91211-bb1c-46f2-bcd8-a92da093ad25 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.56751706Z" level=info msg="Image docker.io/kicbase/echo-server:functional-198694 not found" id=c7e91211-bb1c-46f2-bcd8-a92da093ad25 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.56755705Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-198694 found" id=c7e91211-bb1c-46f2-bcd8-a92da093ad25 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.61237743Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-198694" id=f983679c-8adf-4453-8d30-a89c874062b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.612552228Z" level=info msg="Image localhost/kicbase/echo-server:functional-198694 not found" id=f983679c-8adf-4453-8d30-a89c874062b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:59 functional-198694 crio[10476]: time="2025-12-01T21:25:59.612623471Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-198694 found" id=f983679c-8adf-4453-8d30-a89c874062b7 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:30:06.549933   25843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:30:06.550796   25843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:30:06.552512   25843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:30:06.553202   25843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:30:06.554858   25843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 1 19:31] hrtimer: interrupt took 3224715 ns
	[Dec 1 20:00] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:16] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 1 20:22] systemd-journald[231]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 1 20:37] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:38] overlayfs: idmapped layers are currently not supported
	[  +0.076902] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 1 20:44] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:45] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:58] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:30:06 up  3:12,  0 user,  load average: 0.35, 0.42, 0.46
	Linux functional-198694 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 01 21:30:04 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:30:04 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 985.
	Dec 01 21:30:04 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:30:04 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:30:04 functional-198694 kubelet[25723]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:30:04 functional-198694 kubelet[25723]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:30:04 functional-198694 kubelet[25723]: E1201 21:30:04.959221   25723 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:30:04 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:30:04 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:30:05 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 986.
	Dec 01 21:30:05 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:30:05 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:30:05 functional-198694 kubelet[25743]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:30:05 functional-198694 kubelet[25743]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:30:05 functional-198694 kubelet[25743]: E1201 21:30:05.740182   25743 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:30:05 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:30:05 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:30:06 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 987.
	Dec 01 21:30:06 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:30:06 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:30:06 functional-198694 kubelet[25825]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:30:06 functional-198694 kubelet[25825]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:30:06 functional-198694 kubelet[25825]: E1201 21:30:06.472089   25825 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:30:06 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:30:06 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694: exit status 2 (325.080973ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-198694" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.76s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (3.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-198694 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-198694 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (82.030915ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-198694 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-198694
helpers_test.go:243: (dbg) docker inspect functional-198694:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	        "Created": "2025-12-01T20:58:43.365574809Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515902,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:58:43.423541772Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hostname",
	        "HostsPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/hosts",
	        "LogPath": "/var/lib/docker/containers/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8/e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8-json.log",
	        "Name": "/functional-198694",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-198694:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-198694",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e545295bd958e0e0dd446609d97495fdeae8af9ef2210670201c6c51de76cda8",
	                "LowerDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26-init/diff:/var/lib/docker/overlay2/f0ba49b44048d740697b37803f992c2f7a99e21ce77995ff128ceffc01329aa1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a2bc04a0c4ffb45dd2beca951c359bd8250ec1f2e1418489c344a3122541e26/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-198694",
	                "Source": "/var/lib/docker/volumes/functional-198694/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-198694",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-198694",
	                "name.minikube.sigs.k8s.io": "functional-198694",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8cb3cb57c35171bfce361b9e0de9c9f36ef89baf5e4ad0dd73159d10f1056820",
	            "SandboxKey": "/var/run/docker/netns/8cb3cb57c351",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-198694": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:9a:72:4c:a4:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9750c903db8645b2871ee2eb6fd897b77e607b9a995005513c7bcf81da63c819",
	                    "EndpointID": "884d9ec9fdfc44c10ccd4516f4ea05a765fb3ccb2118db0e8af2392e8613c402",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-198694",
	                        "e545295bd958"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-198694 -n functional-198694: exit status 2 (426.614519ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-198694 logs -n 25: (1.345070444s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-198694 ssh sudo crictl images                                                                                                                     │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ ssh     │ functional-198694 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                           │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ ssh     │ functional-198694 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │                     │
	│ cache   │ functional-198694 cache reload                                                                                                                               │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ ssh     │ functional-198694 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │ 01 Dec 25 21:13 UTC │
	│ kubectl │ functional-198694 kubectl -- --context functional-198694 get pods                                                                                            │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │                     │
	│ start   │ -p functional-198694 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:13 UTC │                     │
	│ cp      │ functional-198694 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ config  │ functional-198694 config unset cpus                                                                                                                          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ config  │ functional-198694 config get cpus                                                                                                                            │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │                     │
	│ config  │ functional-198694 config set cpus 2                                                                                                                          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ config  │ functional-198694 config get cpus                                                                                                                            │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ config  │ functional-198694 config unset cpus                                                                                                                          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ ssh     │ functional-198694 ssh -n functional-198694 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ config  │ functional-198694 config get cpus                                                                                                                            │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │                     │
	│ license │                                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ cp      │ functional-198694 cp functional-198694:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1037925037/001/cp-test.txt │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ ssh     │ functional-198694 ssh sudo systemctl is-active docker                                                                                                        │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │                     │
	│ ssh     │ functional-198694 ssh -n functional-198694 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ ssh     │ functional-198694 ssh sudo systemctl is-active containerd                                                                                                    │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │                     │
	│ cp      │ functional-198694 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ ssh     │ functional-198694 ssh -n functional-198694 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │ 01 Dec 25 21:25 UTC │
	│ image   │ functional-198694 image load --daemon kicbase/echo-server:functional-198694 --alsologtostderr                                                                │ functional-198694 │ jenkins │ v1.37.0 │ 01 Dec 25 21:25 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 21:13:35
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 21:13:35.338314  527777 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:13:35.338426  527777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:13:35.338431  527777 out.go:374] Setting ErrFile to fd 2...
	I1201 21:13:35.338435  527777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:13:35.339011  527777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:13:35.339669  527777 out.go:368] Setting JSON to false
	I1201 21:13:35.340628  527777 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10565,"bootTime":1764613051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 21:13:35.340767  527777 start.go:143] virtualization:  
	I1201 21:13:35.344231  527777 out.go:179] * [functional-198694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 21:13:35.348003  527777 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 21:13:35.348182  527777 notify.go:221] Checking for updates...
	I1201 21:13:35.353585  527777 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 21:13:35.356421  527777 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:13:35.359084  527777 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 21:13:35.361859  527777 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 21:13:35.364606  527777 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 21:13:35.367906  527777 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:13:35.368004  527777 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 21:13:35.404299  527777 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 21:13:35.404422  527777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:13:35.463515  527777 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-01 21:13:35.453981974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:13:35.463609  527777 docker.go:319] overlay module found
	I1201 21:13:35.466875  527777 out.go:179] * Using the docker driver based on existing profile
	I1201 21:13:35.469781  527777 start.go:309] selected driver: docker
	I1201 21:13:35.469793  527777 start.go:927] validating driver "docker" against &{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:13:35.469882  527777 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 21:13:35.469988  527777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:13:35.530406  527777 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-01 21:13:35.520549629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:13:35.530815  527777 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 21:13:35.530841  527777 cni.go:84] Creating CNI manager for ""
	I1201 21:13:35.530897  527777 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:13:35.530938  527777 start.go:353] cluster config:
	{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:13:35.534086  527777 out.go:179] * Starting "functional-198694" primary control-plane node in "functional-198694" cluster
	I1201 21:13:35.536995  527777 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 21:13:35.539929  527777 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 21:13:35.542786  527777 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:13:35.542873  527777 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 21:13:35.563189  527777 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 21:13:35.563200  527777 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1201 21:13:35.608993  527777 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1201 21:13:35.806403  527777 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1201 21:13:35.806571  527777 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/config.json ...
	I1201 21:13:35.806600  527777 cache.go:107] acquiring lock: {Name:mkc02adc0b0ac86da96d7b1c6f73dd96db198bdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806692  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 21:13:35.806702  527777 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 120.653µs
	I1201 21:13:35.806710  527777 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 21:13:35.806721  527777 cache.go:107] acquiring lock: {Name:mk453dcc67fddeb9d4497c9de9efb4fa1295449c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806753  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 21:13:35.806758  527777 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 38.825µs
	I1201 21:13:35.806764  527777 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806774  527777 cache.go:107] acquiring lock: {Name:mk419ddf7fad28d46855543ef84396416e53becc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806815  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 21:13:35.806831  527777 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 48.901µs
	I1201 21:13:35.806838  527777 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806850  527777 cache.go:243] Successfully downloaded all kic artifacts
	I1201 21:13:35.806851  527777 cache.go:107] acquiring lock: {Name:mka55d294ab8a696f44b35601f713e0abbf24c5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806885  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 21:13:35.806880  527777 start.go:360] acquireMachinesLock for functional-198694: {Name:mk75190be8638b73bbf357fb21be879be3d32136 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806893  527777 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 44.405µs
	I1201 21:13:35.806899  527777 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806914  527777 cache.go:107] acquiring lock: {Name:mk6dcec1fac0989e081c750d70caa7d5974f0e1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806939  527777 start.go:364] duration metric: took 38.547µs to acquireMachinesLock for "functional-198694"
	I1201 21:13:35.806944  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 21:13:35.806949  527777 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 42.124µs
	I1201 21:13:35.806954  527777 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 21:13:35.806962  527777 start.go:96] Skipping create...Using existing machine configuration
	I1201 21:13:35.806968  527777 fix.go:54] fixHost starting: 
	I1201 21:13:35.806963  527777 cache.go:107] acquiring lock: {Name:mkf9aa1f704582196eb72cf90c132f43843b4423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.806991  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1201 21:13:35.806995  527777 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.558µs
	I1201 21:13:35.807007  527777 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 21:13:35.807016  527777 cache.go:107] acquiring lock: {Name:mk60d129c4890b38a9b86e2bfa4a9fa21bc4f57a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.807045  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 21:13:35.807049  527777 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 34.657µs
	I1201 21:13:35.807054  527777 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 21:13:35.807062  527777 cache.go:107] acquiring lock: {Name:mk345d9c863dd9143d9156cb17f795118869c197 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 21:13:35.807089  527777 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 21:13:35.807094  527777 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 32.54µs
	I1201 21:13:35.807099  527777 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 21:13:35.807107  527777 cache.go:87] Successfully saved all images to host disk.
	I1201 21:13:35.807314  527777 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:13:35.826290  527777 fix.go:112] recreateIfNeeded on functional-198694: state=Running err=<nil>
	W1201 21:13:35.826315  527777 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 21:13:35.829729  527777 out.go:252] * Updating the running docker "functional-198694" container ...
	I1201 21:13:35.829761  527777 machine.go:94] provisionDockerMachine start ...
	I1201 21:13:35.829853  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:35.849270  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:35.849646  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:35.849655  527777 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 21:13:36.014195  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:13:36.014211  527777 ubuntu.go:182] provisioning hostname "functional-198694"
	I1201 21:13:36.014280  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:36.035339  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:36.035672  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:36.035681  527777 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-198694 && echo "functional-198694" | sudo tee /etc/hostname
	I1201 21:13:36.197202  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-198694
	
	I1201 21:13:36.197287  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:36.217632  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:36.217935  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:36.217948  527777 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-198694' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-198694/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-198694' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 21:13:36.367610  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: 
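The SSH command above is minikube's idempotent hostname fix-up: replace an existing `127.0.1.1` line in `/etc/hosts` if present, otherwise append one. A standalone sketch of the same logic, run against a scratch copy rather than the real `/etc/hosts` (the scratch file and placeholder `old-name` entry are illustrative, not from the log):

```shell
# Sketch of the /etc/hosts update performed above, on a temp copy.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
NAME=functional-198694
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then        # hostname not yet present
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then  # existing 127.0.1.1 entry?
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"  # replace it
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"                  # otherwise append
  fi
fi
grep "$NAME" "$HOSTS"   # → 127.0.1.1 functional-198694
```

Because both branches are guarded by the outer `grep`, re-running the snippet leaves the file unchanged, which is why the real command is safe to issue on every `minikube start`.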
	I1201 21:13:36.367629  527777 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-482752/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-482752/.minikube}
	I1201 21:13:36.367658  527777 ubuntu.go:190] setting up certificates
	I1201 21:13:36.367666  527777 provision.go:84] configureAuth start
	I1201 21:13:36.367747  527777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:13:36.387555  527777 provision.go:143] copyHostCerts
	I1201 21:13:36.387627  527777 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem, removing ...
	I1201 21:13:36.387641  527777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 21:13:36.387724  527777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem (1082 bytes)
	I1201 21:13:36.387835  527777 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem, removing ...
	I1201 21:13:36.387840  527777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 21:13:36.387866  527777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem (1123 bytes)
	I1201 21:13:36.387928  527777 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem, removing ...
	I1201 21:13:36.387933  527777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 21:13:36.387959  527777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem (1675 bytes)
	I1201 21:13:36.388014  527777 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem org=jenkins.functional-198694 san=[127.0.0.1 192.168.49.2 functional-198694 localhost minikube]
	I1201 21:13:36.864413  527777 provision.go:177] copyRemoteCerts
	I1201 21:13:36.864488  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 21:13:36.864542  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:36.883147  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:36.987572  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 21:13:37.015924  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1201 21:13:37.037590  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1201 21:13:37.056483  527777 provision.go:87] duration metric: took 688.787749ms to configureAuth
	I1201 21:13:37.056502  527777 ubuntu.go:206] setting minikube options for container-runtime
	I1201 21:13:37.056696  527777 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:13:37.056802  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.075104  527777 main.go:143] libmachine: Using SSH client type: native
	I1201 21:13:37.075454  527777 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1201 21:13:37.075468  527777 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 21:13:37.432424  527777 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 21:13:37.432439  527777 machine.go:97] duration metric: took 1.602671146s to provisionDockerMachine
	I1201 21:13:37.432451  527777 start.go:293] postStartSetup for "functional-198694" (driver="docker")
	I1201 21:13:37.432466  527777 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 21:13:37.432544  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 21:13:37.432606  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.457485  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.563609  527777 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 21:13:37.567292  527777 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 21:13:37.567310  527777 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 21:13:37.567329  527777 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/addons for local assets ...
	I1201 21:13:37.567430  527777 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/files for local assets ...
	I1201 21:13:37.567517  527777 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> 4860022.pem in /etc/ssl/certs
	I1201 21:13:37.567613  527777 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts -> hosts in /etc/test/nested/copy/486002
	I1201 21:13:37.567670  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/486002
	I1201 21:13:37.575725  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:13:37.593481  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts --> /etc/test/nested/copy/486002/hosts (40 bytes)
	I1201 21:13:37.611620  527777 start.go:296] duration metric: took 179.151488ms for postStartSetup
	I1201 21:13:37.611718  527777 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 21:13:37.611798  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.629587  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.732362  527777 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 21:13:37.737388  527777 fix.go:56] duration metric: took 1.930412863s for fixHost
	I1201 21:13:37.737414  527777 start.go:83] releasing machines lock for "functional-198694", held for 1.930466515s
	I1201 21:13:37.737492  527777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-198694
	I1201 21:13:37.754641  527777 ssh_runner.go:195] Run: cat /version.json
	I1201 21:13:37.754685  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.754954  527777 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 21:13:37.755010  527777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:13:37.773486  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.787845  527777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:13:37.875124  527777 ssh_runner.go:195] Run: systemctl --version
	I1201 21:13:37.974016  527777 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 21:13:38.017000  527777 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 21:13:38.021875  527777 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 21:13:38.021957  527777 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 21:13:38.031594  527777 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 21:13:38.031622  527777 start.go:496] detecting cgroup driver to use...
	I1201 21:13:38.031660  527777 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1201 21:13:38.031747  527777 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 21:13:38.049187  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 21:13:38.064637  527777 docker.go:218] disabling cri-docker service (if available) ...
	I1201 21:13:38.064721  527777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 21:13:38.083239  527777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 21:13:38.097453  527777 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 21:13:38.249215  527777 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 21:13:38.371691  527777 docker.go:234] disabling docker service ...
	I1201 21:13:38.371769  527777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 21:13:38.388782  527777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 21:13:38.402306  527777 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 21:13:38.513914  527777 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 21:13:38.630153  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 21:13:38.644475  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 21:13:38.658966  527777 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 21:13:38.659023  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.668135  527777 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 21:13:38.668192  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.677509  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.686682  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.695781  527777 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 21:13:38.704147  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.713420  527777 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.722196  527777 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 21:13:38.731481  527777 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 21:13:38.740144  527777 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 21:13:38.748176  527777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:13:38.858298  527777 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 21:13:39.035375  527777 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 21:13:39.035464  527777 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 21:13:39.039668  527777 start.go:564] Will wait 60s for crictl version
	I1201 21:13:39.039730  527777 ssh_runner.go:195] Run: which crictl
	I1201 21:13:39.043260  527777 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 21:13:39.078386  527777 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 21:13:39.078499  527777 ssh_runner.go:195] Run: crio --version
	I1201 21:13:39.110667  527777 ssh_runner.go:195] Run: crio --version
	I1201 21:13:39.146750  527777 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1201 21:13:39.149800  527777 cli_runner.go:164] Run: docker network inspect functional-198694 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 21:13:39.166717  527777 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1201 21:13:39.173972  527777 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1201 21:13:39.176755  527777 kubeadm.go:884] updating cluster {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 21:13:39.176898  527777 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 21:13:39.176968  527777 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 21:13:39.210945  527777 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 21:13:39.210958  527777 cache_images.go:86] Images are preloaded, skipping loading
	I1201 21:13:39.210965  527777 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1201 21:13:39.211070  527777 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-198694 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 21:13:39.211187  527777 ssh_runner.go:195] Run: crio config
	I1201 21:13:39.284437  527777 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1201 21:13:39.284481  527777 cni.go:84] Creating CNI manager for ""
	I1201 21:13:39.284491  527777 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 21:13:39.284499  527777 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 21:13:39.284522  527777 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-198694 NodeName:functional-198694 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 21:13:39.284675  527777 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-198694"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 21:13:39.284759  527777 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 21:13:39.293198  527777 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 21:13:39.293275  527777 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 21:13:39.301290  527777 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 21:13:39.315108  527777 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 21:13:39.329814  527777 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1201 21:13:39.343669  527777 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1201 21:13:39.347900  527777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 21:13:39.461077  527777 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 21:13:39.654352  527777 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694 for IP: 192.168.49.2
	I1201 21:13:39.654364  527777 certs.go:195] generating shared ca certs ...
	I1201 21:13:39.654379  527777 certs.go:227] acquiring lock for ca certs: {Name:mk0475ccdbd6f854bab22fd8dfb32cc1af021336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 21:13:39.654515  527777 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key
	I1201 21:13:39.654555  527777 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key
	I1201 21:13:39.654570  527777 certs.go:257] generating profile certs ...
	I1201 21:13:39.654666  527777 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.key
	I1201 21:13:39.654727  527777 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key.ab5f5a28
	I1201 21:13:39.654771  527777 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key
	I1201 21:13:39.654890  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem (1338 bytes)
	W1201 21:13:39.654921  527777 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002_empty.pem, impossibly tiny 0 bytes
	I1201 21:13:39.654928  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 21:13:39.654965  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem (1082 bytes)
	I1201 21:13:39.655015  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem (1123 bytes)
	I1201 21:13:39.655038  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem (1675 bytes)
	I1201 21:13:39.655084  527777 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 21:13:39.655762  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 21:13:39.683427  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 21:13:39.704542  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 21:13:39.724282  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1201 21:13:39.744046  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 21:13:39.765204  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 21:13:39.784677  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 21:13:39.803885  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 21:13:39.822965  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /usr/share/ca-certificates/4860022.pem (1708 bytes)
	I1201 21:13:39.842026  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 21:13:39.860451  527777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem --> /usr/share/ca-certificates/486002.pem (1338 bytes)
	I1201 21:13:39.879380  527777 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 21:13:39.893847  527777 ssh_runner.go:195] Run: openssl version
	I1201 21:13:39.900456  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 21:13:39.910454  527777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:13:39.914599  527777 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:13:39.914672  527777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 21:13:39.957573  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 21:13:39.966576  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/486002.pem && ln -fs /usr/share/ca-certificates/486002.pem /etc/ssl/certs/486002.pem"
	I1201 21:13:39.976178  527777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/486002.pem
	I1201 21:13:39.980649  527777 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 21:13:39.980729  527777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/486002.pem
	I1201 21:13:40.025575  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/486002.pem /etc/ssl/certs/51391683.0"
	I1201 21:13:40.037195  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4860022.pem && ln -fs /usr/share/ca-certificates/4860022.pem /etc/ssl/certs/4860022.pem"
	I1201 21:13:40.047283  527777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4860022.pem
	I1201 21:13:40.051903  527777 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 21:13:40.051976  527777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4860022.pem
	I1201 21:13:40.094396  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4860022.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 21:13:40.103155  527777 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 21:13:40.107392  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 21:13:40.150081  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 21:13:40.192825  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 21:13:40.234772  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 21:13:40.276722  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 21:13:40.318487  527777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 21:13:40.360912  527777 kubeadm.go:401] StartCluster: {Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:13:40.361001  527777 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 21:13:40.361062  527777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 21:13:40.390972  527777 cri.go:89] found id: ""
	I1201 21:13:40.391046  527777 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 21:13:40.399343  527777 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 21:13:40.399354  527777 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 21:13:40.399410  527777 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 21:13:40.407260  527777 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.407785  527777 kubeconfig.go:125] found "functional-198694" server: "https://192.168.49.2:8441"
	I1201 21:13:40.409130  527777 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 21:13:40.418081  527777 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-01 20:59:03.175067800 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-01 21:13:39.337074315 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1201 21:13:40.418090  527777 kubeadm.go:1161] stopping kube-system containers ...
	I1201 21:13:40.418103  527777 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1201 21:13:40.418160  527777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 21:13:40.458573  527777 cri.go:89] found id: ""
	I1201 21:13:40.458639  527777 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1201 21:13:40.477506  527777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 21:13:40.486524  527777 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  1 21:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  1 21:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  1 21:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  1 21:03 /etc/kubernetes/scheduler.conf
	
	I1201 21:13:40.486611  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1201 21:13:40.494590  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1201 21:13:40.502887  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.502952  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 21:13:40.511354  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1201 21:13:40.519815  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.519872  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 21:13:40.528897  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1201 21:13:40.537744  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 21:13:40.537819  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 21:13:40.546165  527777 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 21:13:40.555103  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:40.603848  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:41.842196  527777 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.238322261s)
	I1201 21:13:41.842271  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:42.059194  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:42.130722  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1201 21:13:42.199813  527777 api_server.go:52] waiting for apiserver process to appear ...
	I1201 21:13:42.199901  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:42.700072  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:43.200731  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:43.700027  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:44.200776  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:44.700945  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:45.200498  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:45.700869  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:46.200358  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:46.700900  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:47.200833  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:47.700432  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:48.200342  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:48.700205  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:49.200031  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:49.700873  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:50.200171  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:50.700532  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:51.199969  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:51.700026  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:52.200123  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:52.700046  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:53.200038  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:53.700680  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:54.200039  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:54.700097  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:55.200910  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:55.700336  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:56.200957  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:56.700757  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:57.200131  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:57.700100  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:58.200357  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:58.700032  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:59.200053  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:13:59.700687  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:00.202701  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:00.700294  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:01.200032  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:01.700969  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:02.200893  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:02.700398  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:03.200784  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:03.701004  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:04.200950  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:04.700759  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:05.200806  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:05.700896  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:06.200904  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:06.700082  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:07.200046  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:07.700894  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:08.200914  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:08.700874  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:09.200345  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:09.700662  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:10.200989  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:10.700974  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:11.200085  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:11.700353  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:12.200389  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:12.700081  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:13.200064  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:13.700099  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:14.200140  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:14.699984  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:15.200508  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:15.700076  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:16.200220  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:16.700081  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:17.200107  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:17.700353  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:18.201026  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:18.700092  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:19.200816  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:19.700821  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:20.200768  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:20.700817  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:21.200081  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:21.700135  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:22.200076  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:22.700140  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:23.200109  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:23.700040  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:24.200098  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:24.700221  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:25.200360  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:25.700585  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:26.200737  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:26.700431  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:27.200635  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:27.699983  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:28.200340  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:28.700127  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:29.200075  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:29.700352  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:30.200740  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:30.700086  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:31.200338  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:31.700759  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:32.200785  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:32.700903  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:33.200627  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:33.700920  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:34.200039  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:34.700285  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:35.200800  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:35.700353  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:36.200091  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:36.700843  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:37.200016  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:37.700190  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:38.200098  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:38.700171  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:39.200767  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:39.700973  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:40.200048  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:40.700746  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:41.200808  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:41.700037  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:42.200288  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:42.200384  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:42.231074  527777 cri.go:89] found id: ""
	I1201 21:14:42.231090  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.231099  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:42.231105  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:42.231205  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:42.260877  527777 cri.go:89] found id: ""
	I1201 21:14:42.260892  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.260900  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:42.260906  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:42.260972  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:42.290930  527777 cri.go:89] found id: ""
	I1201 21:14:42.290944  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.290953  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:42.290960  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:42.291034  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:42.323761  527777 cri.go:89] found id: ""
	I1201 21:14:42.323776  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.323784  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:42.323790  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:42.323870  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:42.356722  527777 cri.go:89] found id: ""
	I1201 21:14:42.356738  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.356748  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:42.356756  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:42.356820  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:42.387639  527777 cri.go:89] found id: ""
	I1201 21:14:42.387654  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.387661  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:42.387667  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:42.387738  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:42.433777  527777 cri.go:89] found id: ""
	I1201 21:14:42.433791  527777 logs.go:282] 0 containers: []
	W1201 21:14:42.433798  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:42.433806  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:42.433815  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:42.520716  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:42.520743  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:42.536803  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:42.536820  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:42.605090  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:42.597365   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.598034   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.599719   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.600043   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.601473   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:42.597365   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.598034   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.599719   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.600043   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:42.601473   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:42.605114  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:42.605125  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:42.679935  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:42.679957  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:45.213941  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:45.229905  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:45.229984  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:45.276158  527777 cri.go:89] found id: ""
	I1201 21:14:45.276174  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.276181  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:45.276187  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:45.276259  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:45.307844  527777 cri.go:89] found id: ""
	I1201 21:14:45.307859  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.307867  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:45.307872  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:45.307946  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:45.339831  527777 cri.go:89] found id: ""
	I1201 21:14:45.339845  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.339853  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:45.339858  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:45.339922  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:45.371617  527777 cri.go:89] found id: ""
	I1201 21:14:45.371632  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.371640  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:45.371646  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:45.371705  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:45.399984  527777 cri.go:89] found id: ""
	I1201 21:14:45.400005  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.400012  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:45.400017  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:45.400086  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:45.441742  527777 cri.go:89] found id: ""
	I1201 21:14:45.441755  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.441763  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:45.441769  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:45.441843  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:45.474201  527777 cri.go:89] found id: ""
	I1201 21:14:45.474216  527777 logs.go:282] 0 containers: []
	W1201 21:14:45.474223  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:45.474231  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:45.474241  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:45.541899  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:45.541920  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:45.557525  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:45.557541  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:45.623123  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:45.614602   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.615281   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.616956   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.617711   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.619627   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:45.614602   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.615281   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.616956   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.617711   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:45.619627   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:45.623165  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:45.623176  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:45.703324  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:45.703344  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:48.232324  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:48.242709  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:48.242767  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:48.273768  527777 cri.go:89] found id: ""
	I1201 21:14:48.273782  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.273790  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:48.273795  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:48.273853  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:48.305133  527777 cri.go:89] found id: ""
	I1201 21:14:48.305147  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.305154  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:48.305159  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:48.305218  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:48.331706  527777 cri.go:89] found id: ""
	I1201 21:14:48.331720  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.331727  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:48.331733  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:48.331805  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:48.357401  527777 cri.go:89] found id: ""
	I1201 21:14:48.357414  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.357421  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:48.357426  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:48.357485  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:48.382601  527777 cri.go:89] found id: ""
	I1201 21:14:48.382615  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.382622  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:48.382627  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:48.382685  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:48.414103  527777 cri.go:89] found id: ""
	I1201 21:14:48.414117  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.414124  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:48.414130  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:48.414192  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:48.444275  527777 cri.go:89] found id: ""
	I1201 21:14:48.444289  527777 logs.go:282] 0 containers: []
	W1201 21:14:48.444296  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:48.444304  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:48.444315  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:48.509613  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:48.500550   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.501177   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.502982   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.503577   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.505352   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:48.500550   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.501177   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.502982   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.503577   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:48.505352   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:48.509633  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:48.509645  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:48.583849  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:48.583868  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:48.611095  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:48.611113  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:48.678045  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:48.678067  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:51.193681  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:51.204158  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:51.204220  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:51.228546  527777 cri.go:89] found id: ""
	I1201 21:14:51.228560  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.228567  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:51.228573  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:51.228641  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:51.253363  527777 cri.go:89] found id: ""
	I1201 21:14:51.253377  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.253384  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:51.253389  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:51.253450  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:51.281388  527777 cri.go:89] found id: ""
	I1201 21:14:51.281403  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.281410  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:51.281415  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:51.281472  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:51.312321  527777 cri.go:89] found id: ""
	I1201 21:14:51.312334  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.312341  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:51.312347  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:51.312404  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:51.338071  527777 cri.go:89] found id: ""
	I1201 21:14:51.338084  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.338092  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:51.338097  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:51.338160  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:51.362911  527777 cri.go:89] found id: ""
	I1201 21:14:51.362925  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.362932  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:51.362938  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:51.362996  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:51.392560  527777 cri.go:89] found id: ""
	I1201 21:14:51.392575  527777 logs.go:282] 0 containers: []
	W1201 21:14:51.392582  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:51.392589  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:51.392600  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:51.462446  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:51.462465  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:51.483328  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:51.483345  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:51.550537  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:51.542392   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.543042   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.544572   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.545190   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.546918   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:51.542392   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.543042   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.544572   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.545190   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:51.546918   11851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:51.550546  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:51.550556  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:51.627463  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:51.627484  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:54.160747  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:54.171038  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:54.171098  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:54.197306  527777 cri.go:89] found id: ""
	I1201 21:14:54.197320  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.197327  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:54.197333  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:54.197389  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:54.227205  527777 cri.go:89] found id: ""
	I1201 21:14:54.227219  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.227226  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:54.227232  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:54.227293  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:54.254126  527777 cri.go:89] found id: ""
	I1201 21:14:54.254141  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.254149  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:54.254156  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:54.254218  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:54.282152  527777 cri.go:89] found id: ""
	I1201 21:14:54.282166  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.282173  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:54.282178  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:54.282234  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:54.312220  527777 cri.go:89] found id: ""
	I1201 21:14:54.312234  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.312241  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:54.312246  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:54.312314  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:54.338233  527777 cri.go:89] found id: ""
	I1201 21:14:54.338247  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.338253  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:54.338259  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:54.338317  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:54.364068  527777 cri.go:89] found id: ""
	I1201 21:14:54.364082  527777 logs.go:282] 0 containers: []
	W1201 21:14:54.364089  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:54.364097  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:54.364119  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:54.429655  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:54.429673  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:54.445696  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:54.445712  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:54.514079  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:54.504989   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.506549   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.507008   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.508528   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.508981   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:54.504989   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.506549   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.507008   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.508528   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:54.508981   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:54.514090  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:54.514100  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:14:54.590504  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:54.590526  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:57.119842  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:14:57.129802  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:14:57.129862  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:14:57.154250  527777 cri.go:89] found id: ""
	I1201 21:14:57.154263  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.154271  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:14:57.154276  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:14:57.154332  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:14:57.179738  527777 cri.go:89] found id: ""
	I1201 21:14:57.179761  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.179768  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:14:57.179775  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:14:57.179838  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:14:57.209881  527777 cri.go:89] found id: ""
	I1201 21:14:57.209895  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.209902  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:14:57.209907  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:14:57.209964  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:14:57.239761  527777 cri.go:89] found id: ""
	I1201 21:14:57.239775  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.239782  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:14:57.239787  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:14:57.239851  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:14:57.265438  527777 cri.go:89] found id: ""
	I1201 21:14:57.265457  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.265464  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:14:57.265470  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:14:57.265531  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:14:57.292117  527777 cri.go:89] found id: ""
	I1201 21:14:57.292131  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.292139  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:14:57.292145  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:14:57.292211  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:14:57.321507  527777 cri.go:89] found id: ""
	I1201 21:14:57.321526  527777 logs.go:282] 0 containers: []
	W1201 21:14:57.321539  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:14:57.321547  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:14:57.321562  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:14:57.355489  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:14:57.355506  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:14:57.422253  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:14:57.422274  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:14:57.439866  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:14:57.439884  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:14:57.517974  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:14:57.510196   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.510601   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.512297   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.512646   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.514195   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:14:57.510196   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.510601   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.512297   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.512646   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:14:57.514195   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:14:57.517984  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:14:57.517997  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:00.095116  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:00.167383  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:00.167484  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:00.305857  527777 cri.go:89] found id: ""
	I1201 21:15:00.305874  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.305881  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:00.305888  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:00.305960  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:00.412948  527777 cri.go:89] found id: ""
	I1201 21:15:00.412964  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.412972  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:00.412979  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:00.413063  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:00.497486  527777 cri.go:89] found id: ""
	I1201 21:15:00.497503  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.497511  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:00.497517  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:00.497588  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:00.548544  527777 cri.go:89] found id: ""
	I1201 21:15:00.548558  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.548565  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:00.548571  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:00.548635  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:00.594658  527777 cri.go:89] found id: ""
	I1201 21:15:00.594674  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.594682  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:00.594688  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:00.594758  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:00.625642  527777 cri.go:89] found id: ""
	I1201 21:15:00.625658  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.625665  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:00.625672  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:00.625741  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:00.657944  527777 cri.go:89] found id: ""
	I1201 21:15:00.657968  527777 logs.go:282] 0 containers: []
	W1201 21:15:00.657977  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:00.657987  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:00.657999  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:00.741394  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:00.730733   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.731901   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.732998   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.734744   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.736546   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:00.730733   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.731901   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.732998   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.734744   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:00.736546   12156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:00.741407  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:00.741425  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:00.821320  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:00.821344  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:00.857348  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:00.857380  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:00.927631  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:00.927652  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:03.446387  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:03.456673  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:03.456742  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:03.481752  527777 cri.go:89] found id: ""
	I1201 21:15:03.481766  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.481773  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:03.481779  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:03.481837  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:03.509959  527777 cri.go:89] found id: ""
	I1201 21:15:03.509974  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.509982  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:03.509987  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:03.510050  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:03.536645  527777 cri.go:89] found id: ""
	I1201 21:15:03.536659  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.536665  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:03.536671  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:03.536738  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:03.562917  527777 cri.go:89] found id: ""
	I1201 21:15:03.562932  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.562939  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:03.562945  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:03.563005  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:03.589891  527777 cri.go:89] found id: ""
	I1201 21:15:03.589905  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.589912  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:03.589918  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:03.589977  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:03.622362  527777 cri.go:89] found id: ""
	I1201 21:15:03.622376  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.622384  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:03.622390  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:03.622451  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:03.649882  527777 cri.go:89] found id: ""
	I1201 21:15:03.649897  527777 logs.go:282] 0 containers: []
	W1201 21:15:03.649904  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:03.649912  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:03.649922  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:03.726812  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:03.726832  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:03.741643  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:03.741659  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:03.807830  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:03.800226   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.800973   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.802491   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.802813   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.804371   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:03.800226   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.800973   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.802491   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.802813   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:03.804371   12266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:03.807840  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:03.807851  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:03.882248  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:03.882268  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:06.412792  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:06.423457  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:06.423520  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:06.450416  527777 cri.go:89] found id: ""
	I1201 21:15:06.450434  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.450441  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:06.450461  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:06.450552  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:06.476229  527777 cri.go:89] found id: ""
	I1201 21:15:06.476243  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.476251  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:06.476257  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:06.476313  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:06.504311  527777 cri.go:89] found id: ""
	I1201 21:15:06.504326  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.504333  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:06.504339  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:06.504400  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:06.531500  527777 cri.go:89] found id: ""
	I1201 21:15:06.531515  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.531523  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:06.531529  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:06.531598  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:06.557205  527777 cri.go:89] found id: ""
	I1201 21:15:06.557219  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.557226  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:06.557231  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:06.557296  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:06.583224  527777 cri.go:89] found id: ""
	I1201 21:15:06.583237  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.583244  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:06.583250  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:06.583309  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:06.609560  527777 cri.go:89] found id: ""
	I1201 21:15:06.609574  527777 logs.go:282] 0 containers: []
	W1201 21:15:06.609581  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:06.609589  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:06.609600  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:06.688119  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:06.688138  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:06.718171  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:06.718187  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:06.788360  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:06.788382  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:06.803516  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:06.803532  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:06.871576  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:06.863363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.864057   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.865787   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.866363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.867937   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:06.863363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.864057   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.865787   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.866363   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:06.867937   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:09.373262  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:09.384129  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:09.384191  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:09.415353  527777 cri.go:89] found id: ""
	I1201 21:15:09.415369  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.415377  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:09.415384  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:09.415449  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:09.441666  527777 cri.go:89] found id: ""
	I1201 21:15:09.441681  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.441689  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:09.441707  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:09.441773  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:09.468735  527777 cri.go:89] found id: ""
	I1201 21:15:09.468749  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.468756  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:09.468761  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:09.468820  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:09.495871  527777 cri.go:89] found id: ""
	I1201 21:15:09.495885  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.495892  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:09.495898  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:09.495960  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:09.522124  527777 cri.go:89] found id: ""
	I1201 21:15:09.522138  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.522145  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:09.522151  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:09.522222  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:09.548540  527777 cri.go:89] found id: ""
	I1201 21:15:09.548554  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.548562  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:09.548568  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:09.548628  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:09.581799  527777 cri.go:89] found id: ""
	I1201 21:15:09.581814  527777 logs.go:282] 0 containers: []
	W1201 21:15:09.581823  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:09.581831  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:09.581842  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:09.653172  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:09.653196  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:09.668649  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:09.668666  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:09.742062  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:09.733951   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.734515   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736072   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736575   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.738046   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:09.733951   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.734515   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736072   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.736575   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:09.738046   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:09.742072  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:09.742085  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:09.817239  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:09.817259  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:12.348410  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:12.358969  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:12.359036  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:12.384762  527777 cri.go:89] found id: ""
	I1201 21:15:12.384776  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.384783  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:12.384788  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:12.384849  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:12.411423  527777 cri.go:89] found id: ""
	I1201 21:15:12.411437  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.411444  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:12.411449  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:12.411508  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:12.436624  527777 cri.go:89] found id: ""
	I1201 21:15:12.436638  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.436645  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:12.436650  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:12.436708  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:12.462632  527777 cri.go:89] found id: ""
	I1201 21:15:12.462647  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.462654  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:12.462661  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:12.462724  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:12.488511  527777 cri.go:89] found id: ""
	I1201 21:15:12.488526  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.488537  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:12.488542  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:12.488601  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:12.514421  527777 cri.go:89] found id: ""
	I1201 21:15:12.514434  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.514441  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:12.514448  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:12.514513  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:12.541557  527777 cri.go:89] found id: ""
	I1201 21:15:12.541571  527777 logs.go:282] 0 containers: []
	W1201 21:15:12.541579  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:12.541587  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:12.541598  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:12.573231  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:12.573249  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:12.641686  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:12.641707  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:12.658713  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:12.658727  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:12.743144  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:12.734976   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.735722   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.737218   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.737705   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:12.739191   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:15:12.743155  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:12.743166  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:15.318465  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:15.329023  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:15.329088  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:15.358063  527777 cri.go:89] found id: ""
	I1201 21:15:15.358077  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.358084  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:15.358090  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:15.358148  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:15.387949  527777 cri.go:89] found id: ""
	I1201 21:15:15.387963  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.387971  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:15.387976  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:15.388040  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:15.414396  527777 cri.go:89] found id: ""
	I1201 21:15:15.414412  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.414420  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:15.414425  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:15.414489  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:15.440368  527777 cri.go:89] found id: ""
	I1201 21:15:15.440383  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.440390  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:15.440396  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:15.440455  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:15.471515  527777 cri.go:89] found id: ""
	I1201 21:15:15.471529  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.471538  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:15.471544  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:15.471605  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:15.502736  527777 cri.go:89] found id: ""
	I1201 21:15:15.502750  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.502764  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:15.502770  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:15.502834  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:15.530525  527777 cri.go:89] found id: ""
	I1201 21:15:15.530540  527777 logs.go:282] 0 containers: []
	W1201 21:15:15.530548  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:15.530555  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:15.530566  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:15.597211  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:15.588836   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.589648   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.591302   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.591840   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:15.593419   12676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:15:15.597221  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:15.597232  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:15.673960  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:15.673983  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:15.708635  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:15.708651  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:15.779672  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:15.779693  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:18.296490  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:18.307184  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:18.307258  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:18.340992  527777 cri.go:89] found id: ""
	I1201 21:15:18.341006  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.341021  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:18.341027  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:18.341093  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:18.370602  527777 cri.go:89] found id: ""
	I1201 21:15:18.370626  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.370633  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:18.370642  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:18.370713  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:18.398425  527777 cri.go:89] found id: ""
	I1201 21:15:18.398440  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.398447  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:18.398453  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:18.398527  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:18.424514  527777 cri.go:89] found id: ""
	I1201 21:15:18.424530  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.424537  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:18.424561  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:18.424641  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:18.451718  527777 cri.go:89] found id: ""
	I1201 21:15:18.451732  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.451740  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:18.451746  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:18.451806  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:18.481779  527777 cri.go:89] found id: ""
	I1201 21:15:18.481804  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.481812  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:18.481818  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:18.481885  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:18.509744  527777 cri.go:89] found id: ""
	I1201 21:15:18.509760  527777 logs.go:282] 0 containers: []
	W1201 21:15:18.509767  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:18.509775  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:18.509800  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:18.541318  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:18.541335  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:18.608586  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:18.608608  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:18.625859  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:18.625885  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:18.721362  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:18.711891   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.712647   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.714432   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.715256   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:18.717230   12797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:15:18.721371  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:18.721383  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:21.298842  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:21.309420  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:21.309481  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:21.339650  527777 cri.go:89] found id: ""
	I1201 21:15:21.339664  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.339672  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:21.339678  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:21.339739  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:21.369828  527777 cri.go:89] found id: ""
	I1201 21:15:21.369843  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.369850  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:21.369857  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:21.369925  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:21.396833  527777 cri.go:89] found id: ""
	I1201 21:15:21.396860  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.396868  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:21.396874  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:21.396948  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:21.423340  527777 cri.go:89] found id: ""
	I1201 21:15:21.423354  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.423363  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:21.423369  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:21.423429  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:21.450028  527777 cri.go:89] found id: ""
	I1201 21:15:21.450041  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.450051  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:21.450057  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:21.450115  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:21.476290  527777 cri.go:89] found id: ""
	I1201 21:15:21.476305  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.476312  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:21.476317  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:21.476378  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:21.503570  527777 cri.go:89] found id: ""
	I1201 21:15:21.503591  527777 logs.go:282] 0 containers: []
	W1201 21:15:21.503599  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:21.503607  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:21.503622  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:21.518970  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:21.518995  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:21.583522  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:21.575255   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.575783   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.577341   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.577753   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:21.579360   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:15:21.583581  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:21.583592  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:21.662707  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:21.662730  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:21.693467  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:21.693484  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:24.268299  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:24.279383  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:24.279455  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:24.305720  527777 cri.go:89] found id: ""
	I1201 21:15:24.305733  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.305741  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:24.305746  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:24.305809  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:24.333862  527777 cri.go:89] found id: ""
	I1201 21:15:24.333878  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.333885  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:24.333891  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:24.333965  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:24.365916  527777 cri.go:89] found id: ""
	I1201 21:15:24.365931  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.365939  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:24.365948  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:24.366009  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:24.393185  527777 cri.go:89] found id: ""
	I1201 21:15:24.393202  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.393209  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:24.393216  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:24.393279  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:24.419532  527777 cri.go:89] found id: ""
	I1201 21:15:24.419547  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.419554  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:24.419560  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:24.419629  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:24.445529  527777 cri.go:89] found id: ""
	I1201 21:15:24.445543  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.445550  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:24.445557  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:24.445619  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:24.470988  527777 cri.go:89] found id: ""
	I1201 21:15:24.471002  527777 logs.go:282] 0 containers: []
	W1201 21:15:24.471009  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:24.471017  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:24.471028  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:24.500416  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:24.500433  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:24.566009  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:24.566028  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:24.582350  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:24.582366  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:24.653085  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:24.643454   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.643885   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.645413   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.645743   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:24.647392   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:15:24.653095  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:24.653106  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:27.239323  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:27.250432  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:27.250495  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:27.276796  527777 cri.go:89] found id: ""
	I1201 21:15:27.276824  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.276832  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:27.276837  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:27.276927  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:27.303592  527777 cri.go:89] found id: ""
	I1201 21:15:27.303607  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.303614  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:27.303620  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:27.303685  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:27.330141  527777 cri.go:89] found id: ""
	I1201 21:15:27.330155  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.330163  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:27.330168  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:27.330231  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:27.358477  527777 cri.go:89] found id: ""
	I1201 21:15:27.358491  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.358498  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:27.358503  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:27.358570  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:27.384519  527777 cri.go:89] found id: ""
	I1201 21:15:27.384533  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.384541  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:27.384547  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:27.384610  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:27.410788  527777 cri.go:89] found id: ""
	I1201 21:15:27.410804  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.410811  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:27.410817  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:27.410880  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:27.437727  527777 cri.go:89] found id: ""
	I1201 21:15:27.437742  527777 logs.go:282] 0 containers: []
	W1201 21:15:27.437748  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:27.437756  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:27.437766  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:27.470359  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:27.470376  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:27.540219  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:27.540239  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:27.558165  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:27.558184  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:27.631990  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:27.624260   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.625006   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626587   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626906   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.628425   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:27.624260   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.625006   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626587   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.626906   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:27.628425   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:27.632001  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:27.632013  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:30.214048  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:30.225906  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:30.225977  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:30.254528  527777 cri.go:89] found id: ""
	I1201 21:15:30.254544  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.254552  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:30.254559  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:30.254627  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:30.282356  527777 cri.go:89] found id: ""
	I1201 21:15:30.282371  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.282379  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:30.282385  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:30.282454  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:30.316244  527777 cri.go:89] found id: ""
	I1201 21:15:30.316266  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.316275  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:30.316281  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:30.316356  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:30.349310  527777 cri.go:89] found id: ""
	I1201 21:15:30.349324  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.349338  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:30.349345  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:30.349413  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:30.379233  527777 cri.go:89] found id: ""
	I1201 21:15:30.379259  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.379267  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:30.379273  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:30.379344  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:30.410578  527777 cri.go:89] found id: ""
	I1201 21:15:30.410592  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.410600  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:30.410607  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:30.410715  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:30.439343  527777 cri.go:89] found id: ""
	I1201 21:15:30.439357  527777 logs.go:282] 0 containers: []
	W1201 21:15:30.439365  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:30.439373  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:30.439383  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:30.469722  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:30.469742  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:30.536977  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:30.536999  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:30.552719  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:30.552738  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:30.625200  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:30.616607   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.617292   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619213   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619905   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.621438   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:30.616607   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.617292   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619213   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.619905   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:30.621438   13224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:30.625210  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:30.625221  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:33.202525  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:33.213081  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:33.213144  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:33.239684  527777 cri.go:89] found id: ""
	I1201 21:15:33.239699  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.239707  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:33.239713  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:33.239777  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:33.270046  527777 cri.go:89] found id: ""
	I1201 21:15:33.270060  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.270067  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:33.270073  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:33.270134  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:33.298615  527777 cri.go:89] found id: ""
	I1201 21:15:33.298631  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.298639  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:33.298646  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:33.298715  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:33.330389  527777 cri.go:89] found id: ""
	I1201 21:15:33.330403  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.330410  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:33.330416  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:33.330472  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:33.356054  527777 cri.go:89] found id: ""
	I1201 21:15:33.356068  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.356075  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:33.356081  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:33.356147  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:33.385771  527777 cri.go:89] found id: ""
	I1201 21:15:33.385784  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.385792  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:33.385797  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:33.385852  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:33.412562  527777 cri.go:89] found id: ""
	I1201 21:15:33.412580  527777 logs.go:282] 0 containers: []
	W1201 21:15:33.412587  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:33.412601  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:33.412616  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:33.478848  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:33.478868  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:33.494280  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:33.494296  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:33.574855  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:33.566973   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.567796   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569492   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569806   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.571347   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:33.566973   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.567796   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569492   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.569806   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:33.571347   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:33.574866  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:33.574876  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:33.653087  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:33.653110  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:36.198878  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:36.209291  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:36.209352  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:36.234666  527777 cri.go:89] found id: ""
	I1201 21:15:36.234679  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.234686  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:36.234691  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:36.234747  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:36.260740  527777 cri.go:89] found id: ""
	I1201 21:15:36.260754  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.260762  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:36.260767  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:36.260830  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:36.290674  527777 cri.go:89] found id: ""
	I1201 21:15:36.290688  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.290695  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:36.290700  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:36.290800  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:36.317381  527777 cri.go:89] found id: ""
	I1201 21:15:36.317396  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.317404  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:36.317410  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:36.317477  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:36.346371  527777 cri.go:89] found id: ""
	I1201 21:15:36.346384  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.346391  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:36.346396  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:36.346458  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:36.374545  527777 cri.go:89] found id: ""
	I1201 21:15:36.374559  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.374567  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:36.374573  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:36.374632  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:36.400298  527777 cri.go:89] found id: ""
	I1201 21:15:36.400324  527777 logs.go:282] 0 containers: []
	W1201 21:15:36.400332  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:36.400339  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:36.400350  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:36.468826  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:36.468850  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:36.484335  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:36.484351  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:36.549841  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:36.541985   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.542492   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544187   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544616   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.546198   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:36.541985   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.542492   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544187   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.544616   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:36.546198   13422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:36.549853  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:36.549864  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:36.630562  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:36.630587  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:39.169136  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:39.182222  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:39.182296  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:39.212188  527777 cri.go:89] found id: ""
	I1201 21:15:39.212202  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.212208  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:39.212213  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:39.212270  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:39.237215  527777 cri.go:89] found id: ""
	I1201 21:15:39.237229  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.237236  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:39.237241  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:39.237298  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:39.262205  527777 cri.go:89] found id: ""
	I1201 21:15:39.262219  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.262226  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:39.262232  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:39.262288  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:39.290471  527777 cri.go:89] found id: ""
	I1201 21:15:39.290485  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.290492  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:39.290498  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:39.290559  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:39.316212  527777 cri.go:89] found id: ""
	I1201 21:15:39.316238  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.316245  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:39.316251  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:39.316329  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:39.341014  527777 cri.go:89] found id: ""
	I1201 21:15:39.341037  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.341045  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:39.341051  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:39.341109  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:39.375032  527777 cri.go:89] found id: ""
	I1201 21:15:39.375058  527777 logs.go:282] 0 containers: []
	W1201 21:15:39.375067  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:39.375083  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:39.375093  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:39.447422  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:39.447444  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:39.462737  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:39.462754  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:39.534298  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:39.526942   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.527544   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.528601   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.529043   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.530634   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:39.526942   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.527544   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.528601   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.529043   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:39.530634   13528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:39.534310  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:39.534320  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:39.611187  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:39.611208  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:42.146214  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:42.159004  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:42.159073  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:42.195922  527777 cri.go:89] found id: ""
	I1201 21:15:42.195938  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.195946  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:42.195952  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:42.196022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:42.230178  527777 cri.go:89] found id: ""
	I1201 21:15:42.230193  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.230200  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:42.230206  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:42.230271  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:42.261082  527777 cri.go:89] found id: ""
	I1201 21:15:42.261098  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.261105  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:42.261111  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:42.261188  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:42.295345  527777 cri.go:89] found id: ""
	I1201 21:15:42.295361  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.295377  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:42.295383  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:42.295457  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:42.330093  527777 cri.go:89] found id: ""
	I1201 21:15:42.330109  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.330116  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:42.330122  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:42.330186  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:42.358733  527777 cri.go:89] found id: ""
	I1201 21:15:42.358748  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.358756  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:42.358761  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:42.358823  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:42.388218  527777 cri.go:89] found id: ""
	I1201 21:15:42.388233  527777 logs.go:282] 0 containers: []
	W1201 21:15:42.388240  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:42.388247  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:42.388258  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:42.469165  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:42.469185  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:42.500328  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:42.500345  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:42.569622  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:42.569642  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:42.585628  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:42.585645  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:42.654077  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:42.643924   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.644658   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.646844   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.647501   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:42.648880   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:15:45.155990  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:45.177587  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:45.177664  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:45.216123  527777 cri.go:89] found id: ""
	I1201 21:15:45.216141  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.216149  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:45.216155  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:45.216241  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:45.257016  527777 cri.go:89] found id: ""
	I1201 21:15:45.257036  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.257044  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:45.257053  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:45.257139  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:45.310072  527777 cri.go:89] found id: ""
	I1201 21:15:45.310087  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.310095  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:45.310101  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:45.310165  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:45.339040  527777 cri.go:89] found id: ""
	I1201 21:15:45.339054  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.339062  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:45.339068  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:45.339154  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:45.370340  527777 cri.go:89] found id: ""
	I1201 21:15:45.370354  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.370361  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:45.370366  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:45.370426  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:45.396213  527777 cri.go:89] found id: ""
	I1201 21:15:45.396227  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.396234  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:45.396240  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:45.396299  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:45.423726  527777 cri.go:89] found id: ""
	I1201 21:15:45.423745  527777 logs.go:282] 0 containers: []
	W1201 21:15:45.423755  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:45.423773  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:45.423784  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:45.490150  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:45.481612   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.482336   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.483955   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.484544   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:45.486132   13736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:15:45.490161  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:45.490172  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:45.565908  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:45.565926  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:45.598740  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:45.598755  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:45.666263  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:45.666281  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:48.183348  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:48.193996  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:48.194068  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:48.221096  527777 cri.go:89] found id: ""
	I1201 21:15:48.221110  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.221117  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:48.221123  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:48.221180  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:48.247305  527777 cri.go:89] found id: ""
	I1201 21:15:48.247320  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.247328  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:48.247333  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:48.247392  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:48.277432  527777 cri.go:89] found id: ""
	I1201 21:15:48.277447  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.277453  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:48.277459  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:48.277521  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:48.304618  527777 cri.go:89] found id: ""
	I1201 21:15:48.304636  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.304643  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:48.304649  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:48.304712  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:48.331672  527777 cri.go:89] found id: ""
	I1201 21:15:48.331686  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.331694  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:48.331699  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:48.331757  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:48.360554  527777 cri.go:89] found id: ""
	I1201 21:15:48.360569  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.360577  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:48.360583  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:48.360640  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:48.385002  527777 cri.go:89] found id: ""
	I1201 21:15:48.385016  527777 logs.go:282] 0 containers: []
	W1201 21:15:48.385023  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:48.385032  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:48.385043  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:48.414019  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:48.414036  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:48.479945  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:48.479964  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:48.495187  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:48.495206  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:48.560181  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:48.550756   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.551438   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.553149   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.554808   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:48.555445   13857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:15:48.560191  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:48.560203  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:51.136751  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:51.147836  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:51.147914  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:51.178020  527777 cri.go:89] found id: ""
	I1201 21:15:51.178033  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.178041  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:51.178046  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:51.178106  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:51.206023  527777 cri.go:89] found id: ""
	I1201 21:15:51.206036  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.206044  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:51.206049  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:51.206150  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:51.236344  527777 cri.go:89] found id: ""
	I1201 21:15:51.236359  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.236366  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:51.236371  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:51.236434  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:51.262331  527777 cri.go:89] found id: ""
	I1201 21:15:51.262346  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.262353  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:51.262359  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:51.262419  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:51.290923  527777 cri.go:89] found id: ""
	I1201 21:15:51.290936  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.290944  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:51.290949  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:51.291016  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:51.318520  527777 cri.go:89] found id: ""
	I1201 21:15:51.318535  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.318542  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:51.318548  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:51.318607  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:51.345816  527777 cri.go:89] found id: ""
	I1201 21:15:51.345830  527777 logs.go:282] 0 containers: []
	W1201 21:15:51.345837  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:51.345845  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:51.345857  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:51.361084  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:51.361100  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:51.427299  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:51.418365   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.419193   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.420874   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.421545   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:51.423332   13951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:15:51.427309  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:51.427320  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:51.502906  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:51.502929  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:51.533675  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:51.533691  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:54.100640  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:54.111984  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:54.112047  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:54.137333  527777 cri.go:89] found id: ""
	I1201 21:15:54.137347  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.137353  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:54.137360  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:54.137419  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:54.166609  527777 cri.go:89] found id: ""
	I1201 21:15:54.166624  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.166635  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:54.166640  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:54.166705  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:54.193412  527777 cri.go:89] found id: ""
	I1201 21:15:54.193434  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.193441  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:54.193447  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:54.193509  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:54.219156  527777 cri.go:89] found id: ""
	I1201 21:15:54.219171  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.219178  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:54.219184  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:54.219241  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:54.248184  527777 cri.go:89] found id: ""
	I1201 21:15:54.248197  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.248204  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:54.248210  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:54.248278  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:54.274909  527777 cri.go:89] found id: ""
	I1201 21:15:54.274923  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.274931  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:54.274936  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:54.275003  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:54.300114  527777 cri.go:89] found id: ""
	I1201 21:15:54.300128  527777 logs.go:282] 0 containers: []
	W1201 21:15:54.300135  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:54.300143  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:54.300154  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:54.366293  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:54.366312  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:54.382194  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:54.382210  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:54.446526  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:54.438379   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.439169   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.440693   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.441226   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:54.442826   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:15:54.446536  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:54.446548  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:15:54.525097  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:54.525120  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:57.056605  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:15:57.067114  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:15:57.067185  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:15:57.096913  527777 cri.go:89] found id: ""
	I1201 21:15:57.096926  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.096933  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:15:57.096939  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:15:57.096995  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:15:57.124785  527777 cri.go:89] found id: ""
	I1201 21:15:57.124799  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.124806  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:15:57.124812  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:15:57.124877  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:15:57.151613  527777 cri.go:89] found id: ""
	I1201 21:15:57.151628  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.151635  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:15:57.151640  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:15:57.151702  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:15:57.181422  527777 cri.go:89] found id: ""
	I1201 21:15:57.181437  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.181445  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:15:57.181451  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:15:57.181510  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:15:57.207775  527777 cri.go:89] found id: ""
	I1201 21:15:57.207789  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.207796  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:15:57.207801  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:15:57.207861  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:15:57.232906  527777 cri.go:89] found id: ""
	I1201 21:15:57.232931  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.232939  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:15:57.232945  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:15:57.233016  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:15:57.259075  527777 cri.go:89] found id: ""
	I1201 21:15:57.259100  527777 logs.go:282] 0 containers: []
	W1201 21:15:57.259107  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:15:57.259115  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:15:57.259126  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:15:57.288148  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:15:57.288164  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:15:57.355525  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:15:57.355545  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:15:57.371229  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:15:57.371246  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:15:57.439767  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:15:57.431231   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.431971   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.433692   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.434306   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.436090   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:15:57.431231   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.431971   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.433692   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.434306   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:15:57.436090   14180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:15:57.439779  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:15:57.439791  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:00.016574  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:00.063670  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:00.063743  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:00.181922  527777 cri.go:89] found id: ""
	I1201 21:16:00.181939  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.181947  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:00.181954  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:00.183169  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:00.318653  527777 cri.go:89] found id: ""
	I1201 21:16:00.318668  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.318676  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:00.318682  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:00.318752  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:00.366365  527777 cri.go:89] found id: ""
	I1201 21:16:00.366381  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.366391  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:00.366398  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:00.366497  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:00.432333  527777 cri.go:89] found id: ""
	I1201 21:16:00.432349  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.432358  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:00.432364  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:00.432436  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:00.487199  527777 cri.go:89] found id: ""
	I1201 21:16:00.487216  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.487238  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:00.487244  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:00.487315  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:00.541398  527777 cri.go:89] found id: ""
	I1201 21:16:00.541429  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.541438  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:00.541444  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:00.541530  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:00.577064  527777 cri.go:89] found id: ""
	I1201 21:16:00.577082  527777 logs.go:282] 0 containers: []
	W1201 21:16:00.577095  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:00.577103  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:00.577116  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:00.646395  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:00.646418  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:00.667724  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:00.667741  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:00.750849  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:00.742119   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.743012   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.744823   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.745562   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.747124   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:00.742119   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.743012   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.744823   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.745562   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:00.747124   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:00.750860  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:00.750872  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:00.828858  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:00.828881  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:03.360481  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:03.371537  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:03.371611  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:03.401359  527777 cri.go:89] found id: ""
	I1201 21:16:03.401373  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.401380  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:03.401385  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:03.401452  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:03.428335  527777 cri.go:89] found id: ""
	I1201 21:16:03.428350  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.428358  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:03.428363  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:03.428424  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:03.460610  527777 cri.go:89] found id: ""
	I1201 21:16:03.460623  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.460630  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:03.460636  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:03.460695  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:03.489139  527777 cri.go:89] found id: ""
	I1201 21:16:03.489153  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.489161  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:03.489168  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:03.489234  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:03.519388  527777 cri.go:89] found id: ""
	I1201 21:16:03.519410  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.519418  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:03.519423  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:03.519490  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:03.549588  527777 cri.go:89] found id: ""
	I1201 21:16:03.549602  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.549610  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:03.549615  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:03.549678  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:03.576025  527777 cri.go:89] found id: ""
	I1201 21:16:03.576039  527777 logs.go:282] 0 containers: []
	W1201 21:16:03.576047  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:03.576055  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:03.576066  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:03.605415  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:03.605431  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:03.675775  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:03.675797  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:03.691777  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:03.691793  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:03.765238  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:03.755858   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.756644   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.758434   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.759088   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.760930   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:03.755858   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.756644   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.758434   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.759088   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:03.760930   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:03.765250  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:03.765263  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:06.346338  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:06.356267  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:06.356325  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:06.380678  527777 cri.go:89] found id: ""
	I1201 21:16:06.380691  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.380717  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:06.380723  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:06.380780  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:06.410489  527777 cri.go:89] found id: ""
	I1201 21:16:06.410503  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.410518  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:06.410524  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:06.410588  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:06.443231  527777 cri.go:89] found id: ""
	I1201 21:16:06.443250  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.443257  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:06.443263  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:06.443334  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:06.468603  527777 cri.go:89] found id: ""
	I1201 21:16:06.468618  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.468625  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:06.468631  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:06.468700  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:06.493128  527777 cri.go:89] found id: ""
	I1201 21:16:06.493141  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.493148  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:06.493154  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:06.493212  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:06.518860  527777 cri.go:89] found id: ""
	I1201 21:16:06.518874  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.518881  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:06.518886  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:06.518958  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:06.545817  527777 cri.go:89] found id: ""
	I1201 21:16:06.545831  527777 logs.go:282] 0 containers: []
	W1201 21:16:06.545839  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:06.545846  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:06.545857  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:06.610356  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:06.610378  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:06.625472  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:06.625488  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:06.722623  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:06.711338   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.712429   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.713404   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.714175   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.716915   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:06.711338   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.712429   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.713404   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.714175   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:06.716915   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:06.722633  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:06.722648  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:06.798208  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:06.798228  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:09.328391  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:09.339639  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:09.339706  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:09.368398  527777 cri.go:89] found id: ""
	I1201 21:16:09.368421  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.368428  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:09.368434  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:09.368512  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:09.398525  527777 cri.go:89] found id: ""
	I1201 21:16:09.398540  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.398548  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:09.398553  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:09.398615  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:09.426105  527777 cri.go:89] found id: ""
	I1201 21:16:09.426121  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.426129  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:09.426145  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:09.426205  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:09.456433  527777 cri.go:89] found id: ""
	I1201 21:16:09.456449  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.456456  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:09.456462  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:09.456525  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:09.488473  527777 cri.go:89] found id: ""
	I1201 21:16:09.488488  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.488495  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:09.488503  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:09.488563  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:09.514937  527777 cri.go:89] found id: ""
	I1201 21:16:09.514951  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.514958  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:09.514964  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:09.515027  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:09.545815  527777 cri.go:89] found id: ""
	I1201 21:16:09.545829  527777 logs.go:282] 0 containers: []
	W1201 21:16:09.545837  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:09.545845  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:09.545857  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:09.575097  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:09.575115  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:09.642216  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:09.642237  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:09.663629  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:09.663645  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:09.745863  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:09.737300   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.737977   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.739598   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.740167   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:09.741918   14605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:16:09.745876  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:09.745888  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:12.327853  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:12.338928  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:12.338992  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:12.372550  527777 cri.go:89] found id: ""
	I1201 21:16:12.372583  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.372591  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:12.372597  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:12.372662  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:12.402760  527777 cri.go:89] found id: ""
	I1201 21:16:12.402776  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.402784  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:12.402790  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:12.402851  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:12.429193  527777 cri.go:89] found id: ""
	I1201 21:16:12.429208  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.429215  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:12.429221  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:12.429286  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:12.456952  527777 cri.go:89] found id: ""
	I1201 21:16:12.456966  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.456973  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:12.456978  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:12.457037  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:12.483859  527777 cri.go:89] found id: ""
	I1201 21:16:12.483874  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.483881  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:12.483887  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:12.483950  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:12.510218  527777 cri.go:89] found id: ""
	I1201 21:16:12.510234  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.510242  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:12.510248  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:12.510323  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:12.536841  527777 cri.go:89] found id: ""
	I1201 21:16:12.536856  527777 logs.go:282] 0 containers: []
	W1201 21:16:12.536864  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:12.536871  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:12.536881  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:12.612682  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:12.612702  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:12.641218  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:12.641235  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:12.719908  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:12.719930  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:12.736058  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:12.736077  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:12.803643  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:12.795056   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.795699   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.797375   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.798039   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:12.799685   14714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:16:15.304417  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:15.314647  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:15.314707  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:15.342468  527777 cri.go:89] found id: ""
	I1201 21:16:15.342483  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.342491  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:15.342497  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:15.342559  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:15.369048  527777 cri.go:89] found id: ""
	I1201 21:16:15.369063  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.369071  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:15.369077  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:15.369140  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:15.393869  527777 cri.go:89] found id: ""
	I1201 21:16:15.393884  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.393891  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:15.393897  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:15.393960  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:15.420049  527777 cri.go:89] found id: ""
	I1201 21:16:15.420063  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.420071  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:15.420077  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:15.420136  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:15.450112  527777 cri.go:89] found id: ""
	I1201 21:16:15.450126  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.450134  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:15.450140  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:15.450201  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:15.475788  527777 cri.go:89] found id: ""
	I1201 21:16:15.475803  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.475811  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:15.475817  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:15.475884  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:15.502058  527777 cri.go:89] found id: ""
	I1201 21:16:15.502072  527777 logs.go:282] 0 containers: []
	W1201 21:16:15.502084  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:15.502092  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:15.502102  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:15.535936  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:15.535953  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:15.601548  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:15.601568  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:15.617150  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:15.617167  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:15.694491  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:15.683261   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.684161   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.685978   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.686544   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:15.688226   14810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:16:15.694502  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:15.694514  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:18.282089  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:18.292620  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:18.292687  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:18.320483  527777 cri.go:89] found id: ""
	I1201 21:16:18.320497  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.320504  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:18.320510  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:18.320569  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:18.346376  527777 cri.go:89] found id: ""
	I1201 21:16:18.346389  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.346397  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:18.346402  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:18.346459  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:18.377534  527777 cri.go:89] found id: ""
	I1201 21:16:18.377549  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.377557  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:18.377562  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:18.377619  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:18.402867  527777 cri.go:89] found id: ""
	I1201 21:16:18.402882  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.402892  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:18.402897  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:18.402952  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:18.429104  527777 cri.go:89] found id: ""
	I1201 21:16:18.429119  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.429126  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:18.429132  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:18.429193  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:18.455237  527777 cri.go:89] found id: ""
	I1201 21:16:18.455251  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.455257  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:18.455263  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:18.455330  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:18.480176  527777 cri.go:89] found id: ""
	I1201 21:16:18.480190  527777 logs.go:282] 0 containers: []
	W1201 21:16:18.480197  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:18.480205  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:18.480215  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:18.554692  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:18.554713  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:18.586044  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:18.586062  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:18.654056  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:18.654076  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:18.670115  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:18.670131  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:18.739729  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:18.731971   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.732738   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.734274   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.734737   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:18.736253   14925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:16:21.240925  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:21.251332  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:21.251400  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:21.277213  527777 cri.go:89] found id: ""
	I1201 21:16:21.277228  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.277266  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:21.277275  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:21.277349  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:21.304294  527777 cri.go:89] found id: ""
	I1201 21:16:21.304308  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.304316  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:21.304321  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:21.304393  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:21.331354  527777 cri.go:89] found id: ""
	I1201 21:16:21.331369  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.331377  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:21.331382  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:21.331455  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:21.358548  527777 cri.go:89] found id: ""
	I1201 21:16:21.358563  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.358571  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:21.358577  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:21.358637  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:21.384228  527777 cri.go:89] found id: ""
	I1201 21:16:21.384242  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.384250  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:21.384255  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:21.384321  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:21.413560  527777 cri.go:89] found id: ""
	I1201 21:16:21.413574  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.413581  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:21.413587  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:21.413647  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:21.439790  527777 cri.go:89] found id: ""
	I1201 21:16:21.439805  527777 logs.go:282] 0 containers: []
	W1201 21:16:21.439813  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:21.439821  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:21.439839  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:21.505587  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:21.505607  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:21.522038  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:21.522064  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:21.590692  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:21.582084   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.583389   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.584091   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.585517   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:21.585879   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1201 21:16:21.590718  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:21.590730  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:21.667703  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:21.667727  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:24.203209  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:24.214159  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:24.214230  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:24.242378  527777 cri.go:89] found id: ""
	I1201 21:16:24.242392  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.242399  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:24.242405  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:24.242486  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:24.269017  527777 cri.go:89] found id: ""
	I1201 21:16:24.269032  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.269039  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:24.269045  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:24.269103  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:24.295927  527777 cri.go:89] found id: ""
	I1201 21:16:24.295942  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.295949  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:24.295955  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:24.296019  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:24.321917  527777 cri.go:89] found id: ""
	I1201 21:16:24.321932  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.321939  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:24.321944  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:24.322012  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:24.350147  527777 cri.go:89] found id: ""
	I1201 21:16:24.350163  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.350171  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:24.350177  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:24.350250  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:24.376131  527777 cri.go:89] found id: ""
	I1201 21:16:24.376145  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.376153  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:24.376160  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:24.376220  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:24.403024  527777 cri.go:89] found id: ""
	I1201 21:16:24.403039  527777 logs.go:282] 0 containers: []
	W1201 21:16:24.403046  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:24.403055  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:24.403068  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:24.418212  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:24.418230  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:24.486448  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:24.478347   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.478999   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.480897   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.481565   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.482855   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:24.478347   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.478999   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.480897   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.481565   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:24.482855   15113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:24.486460  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:24.486472  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:24.563285  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:24.563307  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:24.597003  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:24.597023  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:27.167466  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:27.179061  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:27.179139  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:27.210380  527777 cri.go:89] found id: ""
	I1201 21:16:27.210394  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.210402  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:27.210409  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:27.210474  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:27.238732  527777 cri.go:89] found id: ""
	I1201 21:16:27.238747  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.238754  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:27.238760  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:27.238827  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:27.265636  527777 cri.go:89] found id: ""
	I1201 21:16:27.265652  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.265661  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:27.265667  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:27.265736  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:27.292213  527777 cri.go:89] found id: ""
	I1201 21:16:27.292228  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.292235  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:27.292241  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:27.292300  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:27.324732  527777 cri.go:89] found id: ""
	I1201 21:16:27.324747  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.324755  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:27.324762  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:27.324827  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:27.352484  527777 cri.go:89] found id: ""
	I1201 21:16:27.352499  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.352507  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:27.352513  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:27.352590  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:27.384113  527777 cri.go:89] found id: ""
	I1201 21:16:27.384128  527777 logs.go:282] 0 containers: []
	W1201 21:16:27.384136  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:27.384144  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:27.384155  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:27.415615  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:27.415634  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:27.482296  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:27.482319  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:27.498829  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:27.498846  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:27.569732  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:27.560441   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.561149   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.563083   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.564057   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.565939   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:27.560441   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.561149   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.563083   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.564057   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:27.565939   15229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:27.569744  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:27.569757  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:30.145371  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:30.156840  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:30.156922  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:30.184704  527777 cri.go:89] found id: ""
	I1201 21:16:30.184719  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.184727  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:30.184733  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:30.184795  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:30.213086  527777 cri.go:89] found id: ""
	I1201 21:16:30.213110  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.213120  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:30.213125  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:30.213192  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:30.245472  527777 cri.go:89] found id: ""
	I1201 21:16:30.245486  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.245494  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:30.245499  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:30.245565  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:30.273463  527777 cri.go:89] found id: ""
	I1201 21:16:30.273477  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.273485  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:30.273491  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:30.273557  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:30.302141  527777 cri.go:89] found id: ""
	I1201 21:16:30.302156  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.302164  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:30.302170  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:30.302232  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:30.329744  527777 cri.go:89] found id: ""
	I1201 21:16:30.329758  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.329765  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:30.329771  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:30.329833  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:30.356049  527777 cri.go:89] found id: ""
	I1201 21:16:30.356063  527777 logs.go:282] 0 containers: []
	W1201 21:16:30.356071  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:30.356079  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:30.356110  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:30.424124  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:30.415484   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.416264   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.417932   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.418545   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.420321   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:30.415484   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.416264   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.417932   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.418545   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:30.420321   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:30.424134  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:30.424145  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:30.498989  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:30.499009  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:30.536189  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:30.536208  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:30.601111  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:30.601130  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:33.116248  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:33.129790  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:33.129876  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:33.162072  527777 cri.go:89] found id: ""
	I1201 21:16:33.162085  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.162093  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:33.162098  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:33.162168  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:33.188853  527777 cri.go:89] found id: ""
	I1201 21:16:33.188868  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.188875  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:33.188881  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:33.188944  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:33.215527  527777 cri.go:89] found id: ""
	I1201 21:16:33.215541  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.215548  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:33.215554  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:33.215613  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:33.241336  527777 cri.go:89] found id: ""
	I1201 21:16:33.241350  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.241357  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:33.241363  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:33.241422  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:33.267551  527777 cri.go:89] found id: ""
	I1201 21:16:33.267564  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.267571  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:33.267576  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:33.267639  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:33.293257  527777 cri.go:89] found id: ""
	I1201 21:16:33.293273  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.293280  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:33.293286  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:33.293346  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:33.324702  527777 cri.go:89] found id: ""
	I1201 21:16:33.324717  527777 logs.go:282] 0 containers: []
	W1201 21:16:33.324725  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:33.324733  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:33.324745  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:33.393448  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:33.393473  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:33.409048  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:33.409075  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:33.473709  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:33.465395   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.465779   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.467541   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.468183   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.469632   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:33.465395   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.465779   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.467541   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.468183   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:33.469632   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:33.473720  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:33.473731  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:33.549174  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:33.549194  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:36.083124  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:36.093860  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:36.093919  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:36.122911  527777 cri.go:89] found id: ""
	I1201 21:16:36.122925  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.122932  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:36.122938  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:36.123000  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:36.148002  527777 cri.go:89] found id: ""
	I1201 21:16:36.148016  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.148023  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:36.148028  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:36.148088  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:36.173008  527777 cri.go:89] found id: ""
	I1201 21:16:36.173022  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.173029  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:36.173034  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:36.173092  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:36.198828  527777 cri.go:89] found id: ""
	I1201 21:16:36.198841  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.198848  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:36.198854  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:36.198909  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:36.224001  527777 cri.go:89] found id: ""
	I1201 21:16:36.224015  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.224022  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:36.224027  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:36.224085  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:36.249054  527777 cri.go:89] found id: ""
	I1201 21:16:36.249068  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.249075  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:36.249080  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:36.249140  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:36.273000  527777 cri.go:89] found id: ""
	I1201 21:16:36.273014  527777 logs.go:282] 0 containers: []
	W1201 21:16:36.273021  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:36.273029  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:36.273039  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:36.337502  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:36.337521  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:36.353315  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:36.353331  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:36.424612  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:36.416389   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.416852   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.418267   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.419034   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.420807   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:36.416389   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.416852   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.418267   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.419034   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:36.420807   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:36.424623  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:36.424633  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:36.503070  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:36.503100  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:39.034568  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:39.045696  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:39.045760  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:39.071542  527777 cri.go:89] found id: ""
	I1201 21:16:39.071555  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.071563  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:39.071569  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:39.071630  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:39.102301  527777 cri.go:89] found id: ""
	I1201 21:16:39.102315  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.102322  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:39.102328  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:39.102384  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:39.129808  527777 cri.go:89] found id: ""
	I1201 21:16:39.129823  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.129830  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:39.129836  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:39.129895  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:39.155555  527777 cri.go:89] found id: ""
	I1201 21:16:39.155569  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.155576  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:39.155582  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:39.155650  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:39.186394  527777 cri.go:89] found id: ""
	I1201 21:16:39.186408  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.186415  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:39.186420  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:39.186485  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:39.213875  527777 cri.go:89] found id: ""
	I1201 21:16:39.213889  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.213896  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:39.213901  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:39.213957  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:39.243609  527777 cri.go:89] found id: ""
	I1201 21:16:39.243623  527777 logs.go:282] 0 containers: []
	W1201 21:16:39.243631  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:39.243640  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:39.243652  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:39.307878  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:39.307897  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:39.322972  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:39.322989  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:39.391843  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:39.383574   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.384012   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.385493   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.385831   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.387179   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:39.383574   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.384012   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.385493   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.385831   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:39.387179   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:39.391853  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:39.391869  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:39.471894  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:39.471915  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:42.007008  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:42.029520  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:42.029588  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:42.057505  527777 cri.go:89] found id: ""
	I1201 21:16:42.057520  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.057528  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:42.057534  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:42.057598  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:42.097060  527777 cri.go:89] found id: ""
	I1201 21:16:42.097086  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.097094  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:42.097100  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:42.097191  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:42.136029  527777 cri.go:89] found id: ""
	I1201 21:16:42.136048  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.136058  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:42.136064  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:42.136155  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:42.183711  527777 cri.go:89] found id: ""
	I1201 21:16:42.183733  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.183743  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:42.183750  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:42.183825  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:42.219282  527777 cri.go:89] found id: ""
	I1201 21:16:42.219298  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.219320  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:42.219326  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:42.219393  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:42.248969  527777 cri.go:89] found id: ""
	I1201 21:16:42.248986  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.248994  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:42.249005  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:42.249079  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:42.283438  527777 cri.go:89] found id: ""
	I1201 21:16:42.283452  527777 logs.go:282] 0 containers: []
	W1201 21:16:42.283459  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:42.283467  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:42.283479  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:42.355657  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:42.347226   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.347801   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.349475   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.349945   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.351044   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:42.347226   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.347801   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.349475   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.349945   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:42.351044   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:42.355675  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:42.355686  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:42.432138  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:42.432158  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:42.466460  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:42.466475  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:42.532633  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:42.532653  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:45.050487  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:45.077310  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:45.077404  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:45.125431  527777 cri.go:89] found id: ""
	I1201 21:16:45.125455  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.125463  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:45.125469  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:45.125541  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:45.159113  527777 cri.go:89] found id: ""
	I1201 21:16:45.159151  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.159161  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:45.159167  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:45.159238  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:45.205059  527777 cri.go:89] found id: ""
	I1201 21:16:45.205075  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.205084  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:45.205092  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:45.205213  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:45.256952  527777 cri.go:89] found id: ""
	I1201 21:16:45.257035  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.257044  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:45.257051  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:45.257244  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:45.299953  527777 cri.go:89] found id: ""
	I1201 21:16:45.299967  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.299975  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:45.299981  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:45.300047  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:45.334546  527777 cri.go:89] found id: ""
	I1201 21:16:45.334562  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.334570  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:45.334576  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:45.334641  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:45.366922  527777 cri.go:89] found id: ""
	I1201 21:16:45.366936  527777 logs.go:282] 0 containers: []
	W1201 21:16:45.366944  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:45.366952  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:45.366973  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:45.384985  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:45.385003  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:45.455424  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:45.445999   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.446779   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.448616   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.449343   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.450996   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:45.445999   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.446779   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.448616   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.449343   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:45.450996   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:45.455434  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:45.455446  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:45.532668  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:45.532689  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:45.572075  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:45.572092  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:48.147493  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:48.158252  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:48.158331  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:48.185671  527777 cri.go:89] found id: ""
	I1201 21:16:48.185685  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.185692  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:48.185697  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:48.185766  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:48.211977  527777 cri.go:89] found id: ""
	I1201 21:16:48.211991  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.211998  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:48.212003  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:48.212059  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:48.238605  527777 cri.go:89] found id: ""
	I1201 21:16:48.238620  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.238627  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:48.238632  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:48.238691  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:48.272407  527777 cri.go:89] found id: ""
	I1201 21:16:48.272421  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.272428  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:48.272433  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:48.272491  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:48.300451  527777 cri.go:89] found id: ""
	I1201 21:16:48.300465  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.300472  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:48.300478  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:48.300543  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:48.326518  527777 cri.go:89] found id: ""
	I1201 21:16:48.326542  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.326550  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:48.326555  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:48.326629  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:48.353027  527777 cri.go:89] found id: ""
	I1201 21:16:48.353043  527777 logs.go:282] 0 containers: []
	W1201 21:16:48.353050  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:48.353059  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:48.353070  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:48.418908  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:48.418928  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:48.435338  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:48.435358  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:48.502670  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:48.494115   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.494749   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.496453   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.497013   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.498610   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:48.494115   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.494749   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.496453   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.497013   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:48.498610   15958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:48.502708  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:48.502718  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:48.579198  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:48.579219  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:51.111632  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:51.122895  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:51.122970  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:51.149845  527777 cri.go:89] found id: ""
	I1201 21:16:51.149859  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.149867  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:51.149872  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:51.149937  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:51.182385  527777 cri.go:89] found id: ""
	I1201 21:16:51.182399  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.182406  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:51.182411  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:51.182473  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:51.207954  527777 cri.go:89] found id: ""
	I1201 21:16:51.207967  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.208015  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:51.208024  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:51.208080  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:51.233058  527777 cri.go:89] found id: ""
	I1201 21:16:51.233071  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.233077  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:51.233083  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:51.233146  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:51.259105  527777 cri.go:89] found id: ""
	I1201 21:16:51.259119  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.259127  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:51.259147  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:51.259205  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:51.284870  527777 cri.go:89] found id: ""
	I1201 21:16:51.284884  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.284891  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:51.284896  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:51.284953  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:51.312084  527777 cri.go:89] found id: ""
	I1201 21:16:51.312099  527777 logs.go:282] 0 containers: []
	W1201 21:16:51.312106  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:51.312115  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:51.312126  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:51.342115  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:51.342134  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:51.408816  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:51.408836  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:51.425032  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:51.425054  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:51.494088  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:51.485702   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.486261   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.487911   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.488439   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.489973   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:51.485702   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.486261   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.487911   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.488439   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:51.489973   16072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:51.494097  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:51.494107  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:54.070393  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:54.082393  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:54.082464  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:54.112007  527777 cri.go:89] found id: ""
	I1201 21:16:54.112033  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.112041  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:54.112048  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:54.112120  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:54.142629  527777 cri.go:89] found id: ""
	I1201 21:16:54.142643  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.142650  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:54.142656  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:54.142715  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:54.170596  527777 cri.go:89] found id: ""
	I1201 21:16:54.170611  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.170618  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:54.170623  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:54.170685  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:54.199276  527777 cri.go:89] found id: ""
	I1201 21:16:54.199301  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.199309  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:54.199314  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:54.199385  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:54.229268  527777 cri.go:89] found id: ""
	I1201 21:16:54.229285  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.229294  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:54.229300  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:54.229378  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:54.261273  527777 cri.go:89] found id: ""
	I1201 21:16:54.261289  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.261298  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:54.261306  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:54.261409  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:54.289154  527777 cri.go:89] found id: ""
	I1201 21:16:54.289169  527777 logs.go:282] 0 containers: []
	W1201 21:16:54.289189  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:54.289199  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:54.289216  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:54.363048  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:54.355149   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.356097   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.357711   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.358323   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.359471   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:54.355149   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.356097   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.357711   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.358323   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:54.359471   16162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:54.363059  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:54.363070  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:54.440875  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:54.440897  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:16:54.471338  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:54.471355  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:54.543810  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:54.543830  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:57.061388  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:16:57.071929  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:16:57.071998  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:16:57.102516  527777 cri.go:89] found id: ""
	I1201 21:16:57.102531  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.102540  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:16:57.102546  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:16:57.102614  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:16:57.129734  527777 cri.go:89] found id: ""
	I1201 21:16:57.129749  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.129756  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:16:57.129761  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:16:57.129825  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:16:57.160948  527777 cri.go:89] found id: ""
	I1201 21:16:57.160962  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.160971  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:16:57.160977  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:16:57.161049  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:16:57.192059  527777 cri.go:89] found id: ""
	I1201 21:16:57.192075  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.192082  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:16:57.192088  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:16:57.192155  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:16:57.217906  527777 cri.go:89] found id: ""
	I1201 21:16:57.217920  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.217927  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:16:57.217932  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:16:57.217992  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:16:57.246391  527777 cri.go:89] found id: ""
	I1201 21:16:57.246406  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.246414  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:16:57.246420  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:16:57.246480  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:16:57.273534  527777 cri.go:89] found id: ""
	I1201 21:16:57.273558  527777 logs.go:282] 0 containers: []
	W1201 21:16:57.273565  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:16:57.273573  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:16:57.273585  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:16:57.338589  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:16:57.338609  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:16:57.354225  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:16:57.354241  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:16:57.425192  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:16:57.416917   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.417985   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419291   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419806   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.421427   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:16:57.416917   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.417985   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419291   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.419806   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:16:57.421427   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:16:57.425202  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:16:57.425213  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:16:57.501690  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:16:57.501713  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:00.031846  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:00.071974  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:00.072071  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:00.158888  527777 cri.go:89] found id: ""
	I1201 21:17:00.158904  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.158912  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:00.158918  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:00.158994  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:00.267283  527777 cri.go:89] found id: ""
	I1201 21:17:00.267299  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.267306  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:00.267312  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:00.267395  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:00.331710  527777 cri.go:89] found id: ""
	I1201 21:17:00.331725  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.331733  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:00.331740  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:00.331821  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:00.416435  527777 cri.go:89] found id: ""
	I1201 21:17:00.416468  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.416476  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:00.416482  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:00.416566  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:00.456878  527777 cri.go:89] found id: ""
	I1201 21:17:00.456894  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.456904  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:00.456909  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:00.456979  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:00.511096  527777 cri.go:89] found id: ""
	I1201 21:17:00.511113  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.511122  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:00.511166  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:00.511245  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:00.565444  527777 cri.go:89] found id: ""
	I1201 21:17:00.565463  527777 logs.go:282] 0 containers: []
	W1201 21:17:00.565471  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:00.565480  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:00.565498  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:00.641086  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:00.641121  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:00.662045  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:00.662064  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:00.750234  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:00.740709   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.741500   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.743472   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.744204   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.745911   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:00.740709   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.741500   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.743472   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.744204   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:00.745911   16381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:00.750246  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:00.750258  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:00.828511  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:00.828539  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:03.366405  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:03.379053  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:03.379127  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:03.412977  527777 cri.go:89] found id: ""
	I1201 21:17:03.412991  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.412999  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:03.413005  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:03.413074  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:03.442789  527777 cri.go:89] found id: ""
	I1201 21:17:03.442817  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.442827  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:03.442834  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:03.442956  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:03.472731  527777 cri.go:89] found id: ""
	I1201 21:17:03.472758  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.472767  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:03.472772  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:03.472843  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:03.503719  527777 cri.go:89] found id: ""
	I1201 21:17:03.503735  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.503744  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:03.503751  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:03.503823  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:03.533642  527777 cri.go:89] found id: ""
	I1201 21:17:03.533658  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.533665  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:03.533671  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:03.533749  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:03.562889  527777 cri.go:89] found id: ""
	I1201 21:17:03.562908  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.562916  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:03.562922  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:03.563006  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:03.592257  527777 cri.go:89] found id: ""
	I1201 21:17:03.592275  527777 logs.go:282] 0 containers: []
	W1201 21:17:03.592283  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:03.592291  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:03.592303  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:03.660263  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:03.660282  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:03.683357  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:03.683375  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:03.765695  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:03.755989   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.757040   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758018   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758825   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.760781   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:03.755989   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.757040   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758018   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.758825   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:03.760781   16487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:03.765707  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:03.765719  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:03.842543  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:03.842567  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:06.376185  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:06.387932  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:06.388000  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:06.417036  527777 cri.go:89] found id: ""
	I1201 21:17:06.417050  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.417058  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:06.417064  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:06.417125  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:06.447064  527777 cri.go:89] found id: ""
	I1201 21:17:06.447090  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.447098  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:06.447104  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:06.447207  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:06.476879  527777 cri.go:89] found id: ""
	I1201 21:17:06.476893  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.476900  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:06.476905  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:06.476968  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:06.506320  527777 cri.go:89] found id: ""
	I1201 21:17:06.506338  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.506346  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:06.506352  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:06.506419  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:06.535420  527777 cri.go:89] found id: ""
	I1201 21:17:06.535443  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.535451  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:06.535458  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:06.535525  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:06.563751  527777 cri.go:89] found id: ""
	I1201 21:17:06.563784  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.563792  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:06.563798  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:06.563865  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:06.597779  527777 cri.go:89] found id: ""
	I1201 21:17:06.597795  527777 logs.go:282] 0 containers: []
	W1201 21:17:06.597803  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:06.597811  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:06.597823  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:06.681458  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:06.672535   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.673200   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.674869   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.675413   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.677204   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:06.672535   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.673200   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.674869   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.675413   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:06.677204   16582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:06.681470  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:06.681482  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:06.778343  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:06.778369  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:06.812835  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:06.812854  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:06.886097  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:06.886123  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:09.404611  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:09.415307  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:09.415386  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:09.454145  527777 cri.go:89] found id: ""
	I1201 21:17:09.454159  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.454168  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:09.454174  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:09.454240  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:09.483869  527777 cri.go:89] found id: ""
	I1201 21:17:09.483885  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.483893  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:09.483899  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:09.483961  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:09.510637  527777 cri.go:89] found id: ""
	I1201 21:17:09.510650  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.510657  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:09.510662  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:09.510719  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:09.542823  527777 cri.go:89] found id: ""
	I1201 21:17:09.542837  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.542844  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:09.542849  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:09.542911  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:09.570165  527777 cri.go:89] found id: ""
	I1201 21:17:09.570184  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.570191  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:09.570196  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:09.570254  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:09.595630  527777 cri.go:89] found id: ""
	I1201 21:17:09.595645  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.595652  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:09.595658  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:09.595722  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:09.621205  527777 cri.go:89] found id: ""
	I1201 21:17:09.621219  527777 logs.go:282] 0 containers: []
	W1201 21:17:09.621226  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:09.621234  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:09.621244  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:09.700160  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:09.700182  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:09.739401  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:09.739425  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:09.809572  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:09.809594  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:09.828869  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:09.828886  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:09.920701  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:09.910986   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.911656   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.913525   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.914123   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.915691   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:09.910986   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.911656   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.913525   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.914123   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:09.915691   16712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:12.421012  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:12.432213  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:12.432287  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:12.459734  527777 cri.go:89] found id: ""
	I1201 21:17:12.459757  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.459765  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:12.459771  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:12.459840  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:12.485671  527777 cri.go:89] found id: ""
	I1201 21:17:12.485685  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.485692  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:12.485698  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:12.485757  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:12.511548  527777 cri.go:89] found id: ""
	I1201 21:17:12.511564  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.511572  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:12.511577  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:12.511637  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:12.542030  527777 cri.go:89] found id: ""
	I1201 21:17:12.542046  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.542053  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:12.542060  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:12.542120  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:12.567661  527777 cri.go:89] found id: ""
	I1201 21:17:12.567675  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.567691  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:12.567696  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:12.567766  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:12.597625  527777 cri.go:89] found id: ""
	I1201 21:17:12.597640  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.597647  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:12.597653  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:12.597718  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:12.623694  527777 cri.go:89] found id: ""
	I1201 21:17:12.623708  527777 logs.go:282] 0 containers: []
	W1201 21:17:12.623715  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:12.623722  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:12.623733  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:12.638757  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:12.638772  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:12.731591  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:12.722231   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.723090   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.724750   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.725287   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.726853   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:12.722231   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.723090   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.724750   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.725287   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:12.726853   16798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:12.731601  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:12.731612  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:12.808720  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:12.808739  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:12.838448  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:12.838465  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:15.411670  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:15.422227  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:15.422288  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:15.449244  527777 cri.go:89] found id: ""
	I1201 21:17:15.449267  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.449275  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:15.449281  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:15.449351  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:15.475790  527777 cri.go:89] found id: ""
	I1201 21:17:15.475804  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.475812  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:15.475817  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:15.475883  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:15.505030  527777 cri.go:89] found id: ""
	I1201 21:17:15.505044  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.505052  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:15.505057  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:15.505121  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:15.535702  527777 cri.go:89] found id: ""
	I1201 21:17:15.535717  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.535726  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:15.535732  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:15.535802  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:15.561881  527777 cri.go:89] found id: ""
	I1201 21:17:15.561895  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.561903  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:15.561909  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:15.561968  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:15.589608  527777 cri.go:89] found id: ""
	I1201 21:17:15.589623  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.589631  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:15.589637  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:15.589704  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:15.617545  527777 cri.go:89] found id: ""
	I1201 21:17:15.617559  527777 logs.go:282] 0 containers: []
	W1201 21:17:15.617565  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:15.617573  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:15.617584  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:15.633049  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:15.633067  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:15.719603  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:15.707520   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.708421   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710252   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710836   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.715756   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:15.707520   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.708421   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710252   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.710836   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:15.715756   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:15.719617  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:15.719628  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:15.795783  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:15.795806  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:15.829611  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:15.829629  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:18.397343  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:18.407645  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:18.407707  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:18.431992  527777 cri.go:89] found id: ""
	I1201 21:17:18.432013  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.432020  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:18.432025  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:18.432082  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:18.456900  527777 cri.go:89] found id: ""
	I1201 21:17:18.456914  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.456921  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:18.456927  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:18.456985  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:18.482130  527777 cri.go:89] found id: ""
	I1201 21:17:18.482144  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.482151  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:18.482156  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:18.482216  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:18.506788  527777 cri.go:89] found id: ""
	I1201 21:17:18.506802  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.506809  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:18.506814  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:18.506880  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:18.535015  527777 cri.go:89] found id: ""
	I1201 21:17:18.535029  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.535036  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:18.535041  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:18.535102  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:18.561266  527777 cri.go:89] found id: ""
	I1201 21:17:18.561281  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.561288  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:18.561294  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:18.561350  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:18.590006  527777 cri.go:89] found id: ""
	I1201 21:17:18.590020  527777 logs.go:282] 0 containers: []
	W1201 21:17:18.590027  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:18.590034  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:18.590044  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:18.655626  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:18.655644  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:18.673142  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:18.673158  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:18.755072  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:18.747127   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.747701   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749289   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749738   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.751418   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:18.747127   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.747701   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749289   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.749738   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:18.751418   17020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:18.755084  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:18.755097  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:18.830997  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:18.831019  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:21.361828  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:21.372633  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:21.372693  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:21.397967  527777 cri.go:89] found id: ""
	I1201 21:17:21.397981  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.398009  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:21.398014  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:21.398083  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:21.424540  527777 cri.go:89] found id: ""
	I1201 21:17:21.424554  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.424570  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:21.424575  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:21.424644  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:21.450905  527777 cri.go:89] found id: ""
	I1201 21:17:21.450920  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.450948  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:21.450954  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:21.451029  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:21.483885  527777 cri.go:89] found id: ""
	I1201 21:17:21.483899  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.483906  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:21.483911  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:21.483966  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:21.514135  527777 cri.go:89] found id: ""
	I1201 21:17:21.514149  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.514156  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:21.514162  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:21.514221  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:21.540203  527777 cri.go:89] found id: ""
	I1201 21:17:21.540217  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.540224  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:21.540229  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:21.540285  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:21.570752  527777 cri.go:89] found id: ""
	I1201 21:17:21.570765  527777 logs.go:282] 0 containers: []
	W1201 21:17:21.570772  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:21.570780  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:21.570794  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:21.636631  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:21.636651  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:21.652498  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:21.652516  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:21.739586  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:21.730607   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.731381   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733218   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733844   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.735529   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:21.730607   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.731381   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733218   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.733844   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:21.735529   17128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:21.739597  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:21.739609  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:21.815773  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:21.815793  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:24.351500  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:24.361669  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:24.361728  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:24.390941  527777 cri.go:89] found id: ""
	I1201 21:17:24.390955  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.390962  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:24.390968  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:24.391024  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:24.416426  527777 cri.go:89] found id: ""
	I1201 21:17:24.416440  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.416448  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:24.416453  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:24.416510  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:24.443044  527777 cri.go:89] found id: ""
	I1201 21:17:24.443058  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.443065  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:24.443070  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:24.443182  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:24.468754  527777 cri.go:89] found id: ""
	I1201 21:17:24.468769  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.468776  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:24.468781  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:24.468840  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:24.494385  527777 cri.go:89] found id: ""
	I1201 21:17:24.494399  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.494406  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:24.494416  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:24.494477  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:24.519676  527777 cri.go:89] found id: ""
	I1201 21:17:24.519689  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.519696  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:24.519702  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:24.519761  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:24.546000  527777 cri.go:89] found id: ""
	I1201 21:17:24.546014  527777 logs.go:282] 0 containers: []
	W1201 21:17:24.546021  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:24.546028  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:24.546041  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:24.611509  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:24.611529  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:24.626295  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:24.626324  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:24.702708  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:24.694946   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.695784   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697344   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697621   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.699100   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:24.694946   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.695784   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697344   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.697621   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:24.699100   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:24.702719  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:24.702731  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:24.784492  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:24.784514  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:27.320817  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:27.331542  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:27.331602  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:27.357014  527777 cri.go:89] found id: ""
	I1201 21:17:27.357028  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.357035  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:27.357040  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:27.357098  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:27.381792  527777 cri.go:89] found id: ""
	I1201 21:17:27.381806  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.381813  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:27.381818  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:27.381880  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:27.407905  527777 cri.go:89] found id: ""
	I1201 21:17:27.407919  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.407927  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:27.407933  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:27.407994  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:27.433511  527777 cri.go:89] found id: ""
	I1201 21:17:27.433526  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.433533  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:27.433539  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:27.433596  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:27.459609  527777 cri.go:89] found id: ""
	I1201 21:17:27.459622  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.459629  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:27.459635  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:27.459700  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:27.487173  527777 cri.go:89] found id: ""
	I1201 21:17:27.487186  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.487193  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:27.487199  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:27.487257  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:27.512860  527777 cri.go:89] found id: ""
	I1201 21:17:27.512874  527777 logs.go:282] 0 containers: []
	W1201 21:17:27.512881  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:27.512889  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:27.512901  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:27.541723  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:27.541739  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:27.606990  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:27.607009  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:27.622689  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:27.622705  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:27.700563  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:27.692859   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.693627   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695255   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695560   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.697023   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:27.692859   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.693627   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695255   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.695560   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:27.697023   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:27.700573  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:27.700586  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:30.289250  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:30.300157  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:30.300217  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:30.327373  527777 cri.go:89] found id: ""
	I1201 21:17:30.327394  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.327405  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:30.327420  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:30.327492  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:30.353615  527777 cri.go:89] found id: ""
	I1201 21:17:30.353629  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.353636  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:30.353642  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:30.353702  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:30.385214  527777 cri.go:89] found id: ""
	I1201 21:17:30.385228  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.385235  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:30.385240  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:30.385300  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:30.415674  527777 cri.go:89] found id: ""
	I1201 21:17:30.415688  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.415695  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:30.415701  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:30.415767  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:30.442641  527777 cri.go:89] found id: ""
	I1201 21:17:30.442656  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.442663  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:30.442668  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:30.442726  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:30.469997  527777 cri.go:89] found id: ""
	I1201 21:17:30.470010  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.470017  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:30.470023  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:30.470081  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:30.495554  527777 cri.go:89] found id: ""
	I1201 21:17:30.495570  527777 logs.go:282] 0 containers: []
	W1201 21:17:30.495579  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:30.495587  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:30.495599  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:30.559878  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:30.552159   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.552978   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554577   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554896   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.556427   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:30.552159   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.552978   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554577   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.554896   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:30.556427   17428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:30.559888  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:30.559899  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:30.635560  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:30.635581  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:30.673666  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:30.673682  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:30.747787  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:30.747808  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:33.264623  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:33.276366  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:33.276427  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:33.306447  527777 cri.go:89] found id: ""
	I1201 21:17:33.306461  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.306473  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:33.306478  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:33.306538  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:33.334715  527777 cri.go:89] found id: ""
	I1201 21:17:33.334730  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.334738  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:33.334744  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:33.334814  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:33.365674  527777 cri.go:89] found id: ""
	I1201 21:17:33.365690  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.365698  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:33.365705  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:33.365774  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:33.396072  527777 cri.go:89] found id: ""
	I1201 21:17:33.396089  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.396096  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:33.396103  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:33.396175  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:33.429356  527777 cri.go:89] found id: ""
	I1201 21:17:33.429372  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.429381  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:33.429387  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:33.429461  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:33.457917  527777 cri.go:89] found id: ""
	I1201 21:17:33.457932  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.457941  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:33.457948  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:33.458022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:33.490167  527777 cri.go:89] found id: ""
	I1201 21:17:33.490182  527777 logs.go:282] 0 containers: []
	W1201 21:17:33.490190  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:33.490199  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:33.490212  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:33.558131  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:33.558155  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:33.575080  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:33.575101  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:33.657808  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:33.644900   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.645597   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647206   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647744   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.649342   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:33.644900   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.645597   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647206   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.647744   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:33.649342   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:33.657834  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:33.657848  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:33.754296  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:33.754323  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:36.289647  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:36.300774  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:36.300833  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:36.327492  527777 cri.go:89] found id: ""
	I1201 21:17:36.327507  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.327514  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:36.327520  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:36.327583  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:36.359515  527777 cri.go:89] found id: ""
	I1201 21:17:36.359529  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.359537  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:36.359542  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:36.359606  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:36.387977  527777 cri.go:89] found id: ""
	I1201 21:17:36.387990  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.387997  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:36.388002  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:36.388058  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:36.413410  527777 cri.go:89] found id: ""
	I1201 21:17:36.413429  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.413436  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:36.413442  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:36.413499  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:36.440588  527777 cri.go:89] found id: ""
	I1201 21:17:36.440614  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.440622  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:36.440627  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:36.440698  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:36.471404  527777 cri.go:89] found id: ""
	I1201 21:17:36.471419  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.471427  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:36.471433  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:36.471500  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:36.499502  527777 cri.go:89] found id: ""
	I1201 21:17:36.499518  527777 logs.go:282] 0 containers: []
	W1201 21:17:36.499528  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:36.499536  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:36.499546  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:36.568027  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:36.568052  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:36.584561  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:36.584580  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:36.665718  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:36.648985   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.649527   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651261   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651644   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.653266   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:36.648985   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.649527   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651261   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.651644   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:36.653266   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:36.665728  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:36.665740  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:36.748791  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:36.748812  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:39.285189  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:39.296369  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:17:39.296438  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:17:39.323280  527777 cri.go:89] found id: ""
	I1201 21:17:39.323294  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.323306  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:17:39.323312  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:17:39.323379  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:17:39.352092  527777 cri.go:89] found id: ""
	I1201 21:17:39.352107  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.352115  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:17:39.352120  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:17:39.352187  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:17:39.379352  527777 cri.go:89] found id: ""
	I1201 21:17:39.379367  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.379375  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:17:39.379382  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:17:39.379446  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:17:39.406925  527777 cri.go:89] found id: ""
	I1201 21:17:39.406940  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.406947  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:17:39.406954  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:17:39.407022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:17:39.434427  527777 cri.go:89] found id: ""
	I1201 21:17:39.434442  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.434450  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:17:39.434455  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:17:39.434521  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:17:39.466725  527777 cri.go:89] found id: ""
	I1201 21:17:39.466741  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.466748  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:17:39.466755  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:17:39.466821  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:17:39.494952  527777 cri.go:89] found id: ""
	I1201 21:17:39.494968  527777 logs.go:282] 0 containers: []
	W1201 21:17:39.494976  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:17:39.494985  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:17:39.494998  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:17:39.510984  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:17:39.511002  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:17:39.585968  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:17:39.576561   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.577151   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.578340   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.579982   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.580410   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:17:39.576561   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.577151   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.578340   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.579982   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:17:39.580410   17752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:17:39.585981  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:17:39.585993  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:17:39.669009  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:17:39.669033  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 21:17:39.705170  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:17:39.705189  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 21:17:42.275450  527777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:17:42.287572  527777 kubeadm.go:602] duration metric: took 4m1.888207918s to restartPrimaryControlPlane
	W1201 21:17:42.287658  527777 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1201 21:17:42.287747  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1201 21:17:42.711674  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 21:17:42.725511  527777 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 21:17:42.734239  527777 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 21:17:42.734308  527777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 21:17:42.743050  527777 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 21:17:42.743060  527777 kubeadm.go:158] found existing configuration files:
	
	I1201 21:17:42.743120  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1201 21:17:42.751678  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 21:17:42.751731  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 21:17:42.759481  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1201 21:17:42.767903  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 21:17:42.767964  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 21:17:42.776067  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1201 21:17:42.784283  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 21:17:42.784355  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 21:17:42.792582  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1201 21:17:42.801449  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 21:17:42.801518  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 21:17:42.809783  527777 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 21:17:42.849635  527777 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 21:17:42.849689  527777 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 21:17:42.929073  527777 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 21:17:42.929165  527777 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1201 21:17:42.929199  527777 kubeadm.go:319] OS: Linux
	I1201 21:17:42.929243  527777 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 21:17:42.929296  527777 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1201 21:17:42.929342  527777 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 21:17:42.929388  527777 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 21:17:42.929435  527777 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 21:17:42.929482  527777 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 21:17:42.929526  527777 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 21:17:42.929573  527777 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 21:17:42.929617  527777 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1201 21:17:43.002025  527777 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 21:17:43.002165  527777 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 21:17:43.002258  527777 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 21:17:43.013458  527777 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 21:17:43.017000  527777 out.go:252]   - Generating certificates and keys ...
	I1201 21:17:43.017095  527777 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 21:17:43.017170  527777 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 21:17:43.017252  527777 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1201 21:17:43.017311  527777 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1201 21:17:43.017379  527777 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1201 21:17:43.017434  527777 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1201 21:17:43.017501  527777 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1201 21:17:43.017561  527777 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1201 21:17:43.017634  527777 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1201 21:17:43.017705  527777 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1201 21:17:43.017832  527777 kubeadm.go:319] [certs] Using the existing "sa" key
	I1201 21:17:43.017892  527777 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 21:17:43.133992  527777 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 21:17:43.467350  527777 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 21:17:43.613021  527777 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 21:17:43.910424  527777 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 21:17:44.196121  527777 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 21:17:44.196632  527777 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 21:17:44.199145  527777 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 21:17:44.202480  527777 out.go:252]   - Booting up control plane ...
	I1201 21:17:44.202575  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 21:17:44.202651  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 21:17:44.202718  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 21:17:44.217388  527777 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 21:17:44.217714  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 21:17:44.228031  527777 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 21:17:44.228400  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 21:17:44.228517  527777 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 21:17:44.357408  527777 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 21:17:44.357522  527777 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 21:21:44.357404  527777 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000240491s
	I1201 21:21:44.357429  527777 kubeadm.go:319] 
	I1201 21:21:44.357487  527777 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1201 21:21:44.357523  527777 kubeadm.go:319] 	- The kubelet is not running
	I1201 21:21:44.357633  527777 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1201 21:21:44.357637  527777 kubeadm.go:319] 
	I1201 21:21:44.357830  527777 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1201 21:21:44.357863  527777 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1201 21:21:44.357893  527777 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1201 21:21:44.357896  527777 kubeadm.go:319] 
	I1201 21:21:44.361511  527777 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1201 21:21:44.361943  527777 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1201 21:21:44.362051  527777 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 21:21:44.362287  527777 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1201 21:21:44.362292  527777 kubeadm.go:319] 
	I1201 21:21:44.362361  527777 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1201 21:21:44.362491  527777 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000240491s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
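The failure above is kubeadm's wait-control-plane phase giving up after polling the kubelet's healthz endpoint for 4m0s. A minimal sketch of that probe, assuming only the endpoint shown in the log (`http://127.0.0.1:10248/healthz`) and using a short illustrative timeout in place of kubeadm's 4-minute budget:

```shell
# Hypothetical sketch of the kubelet health probe kubeadm runs during
# wait-control-plane. The URL comes from the log above; the 2s timeout
# is an illustrative stand-in for kubeadm's 4m deadline.
probe_kubelet() {
  # succeeds (exit 0) only if the healthz endpoint answers within the timeout
  curl -sSf --max-time 2 "$1" >/dev/null 2>&1
}

URL="http://127.0.0.1:10248/healthz"
if probe_kubelet "$URL"; then
  echo "kubelet healthy"
else
  echo "kubelet not responding at $URL"
fi
```

On the failing node in this run, the probe never succeeds, which is why kubeadm reports `context deadline exceeded` and points at `systemctl status kubelet` / `journalctl -xeu kubelet` for diagnosis.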
	
	I1201 21:21:44.362579  527777 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1201 21:21:44.772977  527777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 21:21:44.786214  527777 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 21:21:44.786270  527777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 21:21:44.794556  527777 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 21:21:44.794568  527777 kubeadm.go:158] found existing configuration files:
	
	I1201 21:21:44.794622  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1201 21:21:44.803048  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 21:21:44.803106  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 21:21:44.810695  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1201 21:21:44.818882  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 21:21:44.818947  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 21:21:44.827077  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1201 21:21:44.834936  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 21:21:44.834995  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 21:21:44.843074  527777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1201 21:21:44.851084  527777 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 21:21:44.851166  527777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
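The grep-then-rm sequence above is minikube's stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed if the check fails (here every grep exits 2 because the files were already wiped by `kubeadm reset`). A sketch of that loop, assuming the endpoint and filenames from the log and using a temp directory instead of /etc/kubernetes:

```shell
# Sketch of minikube's stale-config cleanup (kubeadm.go:164 above).
# Endpoint and filenames are taken from the log; DIR is a sandbox
# stand-in for /etc/kubernetes so this runs without privileges.
ENDPOINT="https://control-plane.minikube.internal:8441"
DIR=$(mktemp -d)
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  if ! grep -q "$ENDPOINT" "$DIR/$f" 2>/dev/null; then
    # missing file or wrong endpoint: either way the config is stale
    rm -f "$DIR/$f"
    echo "removed stale $f"
  fi
done
```

Note that `grep` exits 2 for a missing file and 1 for a present file without a match; the cleanup treats both the same way, which matches the "may not be in ... - will remove" log lines.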
	I1201 21:21:44.858721  527777 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 21:21:44.981319  527777 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1201 21:21:44.981788  527777 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1201 21:21:45.157392  527777 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 21:25:46.243317  527777 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1201 21:25:46.243344  527777 kubeadm.go:319] 
	I1201 21:25:46.243413  527777 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1201 21:25:46.246817  527777 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 21:25:46.246871  527777 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 21:25:46.246962  527777 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 21:25:46.247022  527777 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1201 21:25:46.247057  527777 kubeadm.go:319] OS: Linux
	I1201 21:25:46.247100  527777 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 21:25:46.247175  527777 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1201 21:25:46.247246  527777 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 21:25:46.247312  527777 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 21:25:46.247369  527777 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 21:25:46.247421  527777 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 21:25:46.247464  527777 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 21:25:46.247511  527777 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 21:25:46.247555  527777 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1201 21:25:46.247626  527777 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 21:25:46.247719  527777 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 21:25:46.247811  527777 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 21:25:46.247872  527777 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 21:25:46.250950  527777 out.go:252]   - Generating certificates and keys ...
	I1201 21:25:46.251041  527777 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 21:25:46.251105  527777 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 21:25:46.251224  527777 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1201 21:25:46.251290  527777 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1201 21:25:46.251369  527777 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1201 21:25:46.251431  527777 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1201 21:25:46.251495  527777 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1201 21:25:46.251555  527777 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1201 21:25:46.251629  527777 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1201 21:25:46.251704  527777 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1201 21:25:46.251741  527777 kubeadm.go:319] [certs] Using the existing "sa" key
	I1201 21:25:46.251795  527777 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 21:25:46.251845  527777 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 21:25:46.251899  527777 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 21:25:46.251951  527777 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 21:25:46.252012  527777 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 21:25:46.252065  527777 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 21:25:46.252149  527777 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 21:25:46.252213  527777 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 21:25:46.255065  527777 out.go:252]   - Booting up control plane ...
	I1201 21:25:46.255213  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 21:25:46.255292  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 21:25:46.255359  527777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 21:25:46.255466  527777 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 21:25:46.255590  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 21:25:46.255713  527777 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 21:25:46.255816  527777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 21:25:46.255856  527777 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 21:25:46.256011  527777 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 21:25:46.256134  527777 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 21:25:46.256200  527777 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000272278s
	I1201 21:25:46.256203  527777 kubeadm.go:319] 
	I1201 21:25:46.256259  527777 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1201 21:25:46.256290  527777 kubeadm.go:319] 	- The kubelet is not running
	I1201 21:25:46.256400  527777 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1201 21:25:46.256404  527777 kubeadm.go:319] 
	I1201 21:25:46.256508  527777 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1201 21:25:46.256540  527777 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1201 21:25:46.256569  527777 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1201 21:25:46.256592  527777 kubeadm.go:319] 
	I1201 21:25:46.256631  527777 kubeadm.go:403] duration metric: took 12m5.895739008s to StartCluster
	I1201 21:25:46.256661  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 21:25:46.256721  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 21:25:46.286008  527777 cri.go:89] found id: ""
	I1201 21:25:46.286022  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.286029  527777 logs.go:284] No container was found matching "kube-apiserver"
	I1201 21:25:46.286034  527777 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 21:25:46.286096  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 21:25:46.311936  527777 cri.go:89] found id: ""
	I1201 21:25:46.311950  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.311957  527777 logs.go:284] No container was found matching "etcd"
	I1201 21:25:46.311963  527777 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 21:25:46.312022  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 21:25:46.338008  527777 cri.go:89] found id: ""
	I1201 21:25:46.338022  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.338029  527777 logs.go:284] No container was found matching "coredns"
	I1201 21:25:46.338035  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 21:25:46.338094  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 21:25:46.364430  527777 cri.go:89] found id: ""
	I1201 21:25:46.364446  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.364453  527777 logs.go:284] No container was found matching "kube-scheduler"
	I1201 21:25:46.364459  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 21:25:46.364519  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 21:25:46.390553  527777 cri.go:89] found id: ""
	I1201 21:25:46.390568  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.390574  527777 logs.go:284] No container was found matching "kube-proxy"
	I1201 21:25:46.390580  527777 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 21:25:46.390638  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 21:25:46.416135  527777 cri.go:89] found id: ""
	I1201 21:25:46.416149  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.416156  527777 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 21:25:46.416161  527777 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 21:25:46.416215  527777 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 21:25:46.441110  527777 cri.go:89] found id: ""
	I1201 21:25:46.441124  527777 logs.go:282] 0 containers: []
	W1201 21:25:46.441131  527777 logs.go:284] No container was found matching "kindnet"
	I1201 21:25:46.441139  527777 logs.go:123] Gathering logs for dmesg ...
	I1201 21:25:46.441160  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 21:25:46.456311  527777 logs.go:123] Gathering logs for describe nodes ...
	I1201 21:25:46.456328  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 21:25:46.535568  527777 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:25:46.527894   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.528437   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.529932   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.530345   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.531878   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1201 21:25:46.527894   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.528437   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.529932   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.530345   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:46.531878   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 21:25:46.535579  527777 logs.go:123] Gathering logs for CRI-O ...
	I1201 21:25:46.535591  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 21:25:46.613336  527777 logs.go:123] Gathering logs for container status ...
	I1201 21:25:46.613357  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
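The container-status command above uses a two-level fallback: `which crictl || echo crictl` resolves the binary path if crictl is installed but otherwise keeps the bare name, and `|| sudo docker ps -a` catches the case where crictl is absent or fails entirely. A sketch of the idiom, using a hypothetical missing tool name in place of crictl:

```shell
# Sketch of the `which X || echo X` fallback idiom from the log above.
# "no-such-runtime-cli" is a hypothetical stand-in for crictl.
BIN=$(which no-such-runtime-cli 2>/dev/null || echo no-such-runtime-cli)
echo "resolved: $BIN"

# If the resolved command cannot run, fall through to the alternative
# (in the real log, `sudo docker ps -a`).
"$BIN" ps -a 2>/dev/null || echo "fallback engaged"
```

Without the `|| echo`, an uninstalled crictl would leave `$BIN` empty and the outer command would be malformed; the idiom guarantees a non-empty command word so the `||` fallback can fire cleanly.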
	I1201 21:25:46.643384  527777 logs.go:123] Gathering logs for kubelet ...
	I1201 21:25:46.643410  527777 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1201 21:25:46.714793  527777 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000272278s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1201 21:25:46.714844  527777 out.go:285] * 
	W1201 21:25:46.714913  527777 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000272278s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1201 21:25:46.714940  527777 out.go:285] * 
	W1201 21:25:46.717121  527777 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 21:25:46.722121  527777 out.go:203] 
	W1201 21:25:46.725981  527777 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000272278s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1201 21:25:46.726037  527777 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1201 21:25:46.726060  527777 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1201 21:25:46.729457  527777 out.go:203] 
	
	
	==> CRI-O <==
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028547362Z" level=info msg="starting plugins..."
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028562434Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 01 21:13:39 functional-198694 crio[10476]: time="2025-12-01T21:13:39.028635598Z" level=info msg="No systemd watchdog enabled"
	Dec 01 21:13:39 functional-198694 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.006897207Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=6899020c-e81d-4ca2-b78d-1b19ba925f8d name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.008172907Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=85515e67-9e24-4eed-9690-db5bbe0ab759 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.009097715Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=c8e368b2-0181-4d6d-8bf3-4e28d45c02c7 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.009733916Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=0fc0011a-c9a7-42d2-a5b8-995e0a543565 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.010282103Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=345cda63-6b08-4110-81b2-46c3bae48473 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.010980374Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=5390fbb3-60dc-4145-9a48-c3c46e1b2cb6 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:17:43 functional-198694 crio[10476]: time="2025-12-01T21:17:43.011663876Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=6ae96308-5839-4062-8cab-2394de4e389c name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.162637929Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=d0f240a5-2441-4e93-9b4a-f3d4bd7ad9c7 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.164075956Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=0616d176-adc2-492a-ae1c-f0f024bafeaf name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.164807688Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=b908770b-6817-4806-aa77-5607a1538338 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.167796208Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=3df13166-7c04-4daf-93af-7c9be539fdad name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.168806661Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=16a3784c-0cb1-4a72-824d-e721ee5352ce name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.16959947Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=de159f67-5257-40aa-8e51-ddafe4c8e78c name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:21:45 functional-198694 crio[10476]: time="2025-12-01T21:21:45.170614172Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1626a35a-f287-41b9-b7fb-8a0f7945ff57 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:55 functional-198694 crio[10476]: time="2025-12-01T21:25:55.876002109Z" level=info msg="Checking image status: kicbase/echo-server:functional-198694" id=e5e42278-5f4a-44c8-8df2-4801b75cda77 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:55 functional-198694 crio[10476]: time="2025-12-01T21:25:55.958015039Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-198694" id=60b0690d-119a-4b74-971b-527f5644551b name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:55 functional-198694 crio[10476]: time="2025-12-01T21:25:55.958188786Z" level=info msg="Image docker.io/kicbase/echo-server:functional-198694 not found" id=60b0690d-119a-4b74-971b-527f5644551b name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:55 functional-198694 crio[10476]: time="2025-12-01T21:25:55.958248239Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-198694 found" id=60b0690d-119a-4b74-971b-527f5644551b name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:56 functional-198694 crio[10476]: time="2025-12-01T21:25:56.002277819Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-198694" id=9d0f6f34-dff8-41d4-bb31-fb357f4c68af name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:56 functional-198694 crio[10476]: time="2025-12-01T21:25:56.002458196Z" level=info msg="Image localhost/kicbase/echo-server:functional-198694 not found" id=9d0f6f34-dff8-41d4-bb31-fb357f4c68af name=/runtime.v1.ImageService/ImageStatus
	Dec 01 21:25:56 functional-198694 crio[10476]: time="2025-12-01T21:25:56.002508862Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-198694 found" id=9d0f6f34-dff8-41d4-bb31-fb357f4c68af name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1201 21:25:57.125989   22316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:57.126973   22316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:57.128622   22316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:57.129138   22316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1201 21:25:57.130702   22316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 1 19:31] hrtimer: interrupt took 3224715 ns
	[Dec 1 20:00] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:16] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 1 20:22] systemd-journald[231]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 1 20:37] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 1 20:38] overlayfs: idmapped layers are currently not supported
	[  +0.076902] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 1 20:44] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:45] overlayfs: idmapped layers are currently not supported
	[Dec 1 20:58] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:25:57 up  3:08,  0 user,  load average: 0.52, 0.28, 0.42
	Linux functional-198694 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 01 21:25:54 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:25:55 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 652.
	Dec 01 21:25:55 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:25:55 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:25:55 functional-198694 kubelet[22102]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:25:55 functional-198694 kubelet[22102]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:25:55 functional-198694 kubelet[22102]: E1201 21:25:55.270994   22102 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:25:55 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:25:55 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:25:55 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 653.
	Dec 01 21:25:55 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:25:55 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:25:56 functional-198694 kubelet[22159]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:25:56 functional-198694 kubelet[22159]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:25:56 functional-198694 kubelet[22159]: E1201 21:25:56.042302   22159 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:25:56 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:25:56 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 21:25:56 functional-198694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 654.
	Dec 01 21:25:56 functional-198694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:25:56 functional-198694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 21:25:56 functional-198694 kubelet[22197]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:25:56 functional-198694 kubelet[22197]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 21:25:56 functional-198694 kubelet[22197]: E1201 21:25:56.723941   22197 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 21:25:56 functional-198694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 21:25:56 functional-198694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-198694 -n functional-198694: exit status 2 (512.984959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-198694" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (3.03s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 image load --daemon kicbase/echo-server:functional-198694 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-198694" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 image load --daemon kicbase/echo-server:functional-198694 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-198694" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-198694
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 image load --daemon kicbase/echo-server:functional-198694 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-198694" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 image save kicbase/echo-server:functional-198694 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1201 21:25:59.987734  541958 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:25:59.987967  541958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:25:59.987981  541958 out.go:374] Setting ErrFile to fd 2...
	I1201 21:25:59.987988  541958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:25:59.988325  541958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:25:59.989035  541958 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:25:59.989225  541958 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:25:59.989889  541958 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
	I1201 21:26:00.018569  541958 ssh_runner.go:195] Run: systemctl --version
	I1201 21:26:00.018627  541958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
	I1201 21:26:00.086672  541958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
	I1201 21:26:00.279401  541958 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1201 21:26:00.279489  541958 cache_images.go:255] Failed to load cached images for "functional-198694": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1201 21:26:00.279524  541958 cache_images.go:267] failed pushing to: functional-198694

** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-198694
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 image save --daemon kicbase/echo-server:functional-198694 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-198694
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-198694: exit status 1 (17.724888ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-198694

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-198694

** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.53s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-198694 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-198694 create deployment hello-node --image kicbase/echo-server: exit status 1 (77.23845ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-198694 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-198694 service list: exit status 103 (346.098649ms)

-- stdout --
	* The control-plane node functional-198694 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-198694"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-198694 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-198694 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-198694\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-198694 service list -o json: exit status 103 (352.219632ms)

-- stdout --
	* The control-plane node functional-198694 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-198694"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-198694 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-198694 service --namespace=default --https --url hello-node: exit status 103 (370.892263ms)

-- stdout --
	* The control-plane node functional-198694 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-198694"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-198694 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-198694 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-198694 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1201 21:26:01.955065  542540 out.go:360] Setting OutFile to fd 1 ...
I1201 21:26:01.955325  542540 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 21:26:01.955342  542540 out.go:374] Setting ErrFile to fd 2...
I1201 21:26:01.955349  542540 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 21:26:01.955745  542540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
I1201 21:26:01.956219  542540 mustload.go:66] Loading cluster: functional-198694
I1201 21:26:01.956755  542540 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 21:26:01.957306  542540 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
I1201 21:26:02.006860  542540 host.go:66] Checking if "functional-198694" exists ...
I1201 21:26:02.007312  542540 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1201 21:26:02.119855  542540 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 21:26:02.103442375 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1201 21:26:02.119973  542540 api_server.go:166] Checking apiserver status ...
I1201 21:26:02.120035  542540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1201 21:26:02.120081  542540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
I1201 21:26:02.168267  542540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
W1201 21:26:02.327213  542540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1201 21:26:02.330436  542540 out.go:179] * The control-plane node functional-198694 apiserver is not running: (state=Stopped)
I1201 21:26:02.333392  542540 out.go:179]   To start a cluster, run: "minikube start -p functional-198694"

stdout: * The control-plane node functional-198694 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-198694"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-198694 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 542541: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-198694 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-198694 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-198694 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-198694 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-198694 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-198694 service hello-node --url --format={{.IP}}: exit status 103 (411.727313ms)

-- stdout --
	* The control-plane node functional-198694 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-198694"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-198694 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-198694 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-198694\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-198694 service hello-node --url: exit status 103 (373.724877ms)

-- stdout --
	* The control-plane node functional-198694 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-198694"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-198694 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-198694 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-198694"
functional_test.go:1579: failed to parse "* The control-plane node functional-198694 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-198694\"": parse "* The control-plane node functional-198694 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-198694\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-198694 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-198694 apply -f testdata/testsvc.yaml: exit status 1 (192.263063ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-198694 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (102.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.111.187.170": Temporary Error: Get "http://10.111.187.170": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-198694 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-198694 get svc nginx-svc: exit status 1 (56.584677ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-198694 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (102.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2848316081/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764624471985109686" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2848316081/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764624471985109686" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2848316081/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764624471985109686" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2848316081/001/test-1764624471985109686
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-198694 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (376.822669ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1201 21:27:52.362216  486002 retry.go:31] will retry after 319.824703ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "findmnt -T /mount-9p | grep 9p"
E1201 21:27:52.876118  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  1 21:27 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  1 21:27 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  1 21:27 test-1764624471985109686
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh cat /mount-9p/test-1764624471985109686
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-198694 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-198694 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (61.512429ms)
                                                
** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-198694 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-198694 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (293.569207ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=42999)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec  1 21:27 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec  1 21:27 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec  1 21:27 test-1764624471985109686
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-198694 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2848316081/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2848316081/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2848316081/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:42999
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2848316081/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2848316081/001:/mount-9p --alsologtostderr -v=1] stderr:
I1201 21:27:52.046609  545018 out.go:360] Setting OutFile to fd 1 ...
I1201 21:27:52.046811  545018 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 21:27:52.046833  545018 out.go:374] Setting ErrFile to fd 2...
I1201 21:27:52.046849  545018 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 21:27:52.047188  545018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
I1201 21:27:52.047558  545018 mustload.go:66] Loading cluster: functional-198694
I1201 21:27:52.047978  545018 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 21:27:52.048588  545018 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
I1201 21:27:52.071380  545018 host.go:66] Checking if "functional-198694" exists ...
I1201 21:27:52.071714  545018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1201 21:27:52.199872  545018 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 21:27:52.18029104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1201 21:27:52.200051  545018 cli_runner.go:164] Run: docker network inspect functional-198694 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1201 21:27:52.234924  545018 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2848316081/001 into VM as /mount-9p ...
I1201 21:27:52.238048  545018 out.go:179]   - Mount type:   9p
I1201 21:27:52.240942  545018 out.go:179]   - User ID:      docker
I1201 21:27:52.243828  545018 out.go:179]   - Group ID:     docker
I1201 21:27:52.246698  545018 out.go:179]   - Version:      9p2000.L
I1201 21:27:52.251971  545018 out.go:179]   - Message Size: 262144
I1201 21:27:52.254812  545018 out.go:179]   - Options:      map[]
I1201 21:27:52.263411  545018 out.go:179]   - Bind Address: 192.168.49.1:42999
I1201 21:27:52.266339  545018 out.go:179] * Userspace file server: 
I1201 21:27:52.266680  545018 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1201 21:27:52.266792  545018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
I1201 21:27:52.286756  545018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
I1201 21:27:52.390037  545018 mount.go:180] unmount for /mount-9p ran successfully
I1201 21:27:52.390062  545018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1201 21:27:52.398855  545018 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=42999,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1201 21:27:52.409483  545018 main.go:127] stdlog: ufs.go:141 connected
I1201 21:27:52.409654  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tversion tag 65535 msize 262144 version '9P2000.L'
I1201 21:27:52.409697  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rversion tag 65535 msize 262144 version '9P2000'
I1201 21:27:52.409916  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1201 21:27:52.409987  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rattach tag 0 aqid (c9d249 dbd097ab 'd')
I1201 21:27:52.410275  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tstat tag 0 fid 0
I1201 21:27:52.410330  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d249 dbd097ab 'd') m d775 at 0 mt 1764624471 l 4096 t 0 d 0 ext )
I1201 21:27:52.411688  545018 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/.mount-process: {Name:mkc79a65018d7566b5790740f910c9cb65765951 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 21:27:52.411874  545018 mount.go:105] mount successful: ""
I1201 21:27:52.415245  545018 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2848316081/001 to /mount-9p
I1201 21:27:52.418099  545018 out.go:203] 
I1201 21:27:52.420893  545018 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1201 21:27:53.239684  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tstat tag 0 fid 0
I1201 21:27:53.239768  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d249 dbd097ab 'd') m d775 at 0 mt 1764624471 l 4096 t 0 d 0 ext )
I1201 21:27:53.240128  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Twalk tag 0 fid 0 newfid 1 
I1201 21:27:53.240199  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rwalk tag 0 
I1201 21:27:53.240360  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Topen tag 0 fid 1 mode 0
I1201 21:27:53.240422  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Ropen tag 0 qid (c9d249 dbd097ab 'd') iounit 0
I1201 21:27:53.240607  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tstat tag 0 fid 0
I1201 21:27:53.240669  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d249 dbd097ab 'd') m d775 at 0 mt 1764624471 l 4096 t 0 d 0 ext )
I1201 21:27:53.240849  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tread tag 0 fid 1 offset 0 count 262120
I1201 21:27:53.240973  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rread tag 0 count 258
I1201 21:27:53.241113  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tread tag 0 fid 1 offset 258 count 261862
I1201 21:27:53.241142  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rread tag 0 count 0
I1201 21:27:53.241275  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tread tag 0 fid 1 offset 258 count 262120
I1201 21:27:53.241307  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rread tag 0 count 0
I1201 21:27:53.241446  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Twalk tag 0 fid 0 newfid 2 0:'test-1764624471985109686' 
I1201 21:27:53.241484  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rwalk tag 0 (c9d24c dbd097ab '') 
I1201 21:27:53.241603  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tstat tag 0 fid 2
I1201 21:27:53.241638  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rstat tag 0 st ('test-1764624471985109686' 'jenkins' 'jenkins' '' q (c9d24c dbd097ab '') m 644 at 0 mt 1764624471 l 24 t 0 d 0 ext )
I1201 21:27:53.241785  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tstat tag 0 fid 2
I1201 21:27:53.241819  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rstat tag 0 st ('test-1764624471985109686' 'jenkins' 'jenkins' '' q (c9d24c dbd097ab '') m 644 at 0 mt 1764624471 l 24 t 0 d 0 ext )
I1201 21:27:53.241935  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tclunk tag 0 fid 2
I1201 21:27:53.241968  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rclunk tag 0
I1201 21:27:53.242106  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1201 21:27:53.242145  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rwalk tag 0 (c9d24a dbd097ab '') 
I1201 21:27:53.242256  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tstat tag 0 fid 2
I1201 21:27:53.242288  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d24a dbd097ab '') m 644 at 0 mt 1764624471 l 24 t 0 d 0 ext )
I1201 21:27:53.242417  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tstat tag 0 fid 2
I1201 21:27:53.242450  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d24a dbd097ab '') m 644 at 0 mt 1764624471 l 24 t 0 d 0 ext )
I1201 21:27:53.242566  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tclunk tag 0 fid 2
I1201 21:27:53.242589  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rclunk tag 0
I1201 21:27:53.242792  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1201 21:27:53.242850  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rwalk tag 0 (c9d24b dbd097ab '') 
I1201 21:27:53.242993  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tstat tag 0 fid 2
I1201 21:27:53.243032  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d24b dbd097ab '') m 644 at 0 mt 1764624471 l 24 t 0 d 0 ext )
I1201 21:27:53.243168  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tstat tag 0 fid 2
I1201 21:27:53.243215  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d24b dbd097ab '') m 644 at 0 mt 1764624471 l 24 t 0 d 0 ext )
I1201 21:27:53.243335  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tclunk tag 0 fid 2
I1201 21:27:53.243367  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rclunk tag 0
I1201 21:27:53.243495  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tread tag 0 fid 1 offset 258 count 262120
I1201 21:27:53.243524  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rread tag 0 count 0
I1201 21:27:53.243667  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tclunk tag 0 fid 1
I1201 21:27:53.243699  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rclunk tag 0
I1201 21:27:53.531411  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Twalk tag 0 fid 0 newfid 1 0:'test-1764624471985109686' 
I1201 21:27:53.531487  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rwalk tag 0 (c9d24c dbd097ab '') 
I1201 21:27:53.531674  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tstat tag 0 fid 1
I1201 21:27:53.531725  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rstat tag 0 st ('test-1764624471985109686' 'jenkins' 'jenkins' '' q (c9d24c dbd097ab '') m 644 at 0 mt 1764624471 l 24 t 0 d 0 ext )
I1201 21:27:53.531871  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Twalk tag 0 fid 1 newfid 2 
I1201 21:27:53.531901  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rwalk tag 0 
I1201 21:27:53.532041  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Topen tag 0 fid 2 mode 0
I1201 21:27:53.532103  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Ropen tag 0 qid (c9d24c dbd097ab '') iounit 0
I1201 21:27:53.532245  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tstat tag 0 fid 1
I1201 21:27:53.532284  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rstat tag 0 st ('test-1764624471985109686' 'jenkins' 'jenkins' '' q (c9d24c dbd097ab '') m 644 at 0 mt 1764624471 l 24 t 0 d 0 ext )
I1201 21:27:53.532440  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tread tag 0 fid 2 offset 0 count 262120
I1201 21:27:53.532480  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rread tag 0 count 24
I1201 21:27:53.532607  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tread tag 0 fid 2 offset 24 count 262120
I1201 21:27:53.532637  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rread tag 0 count 0
I1201 21:27:53.532805  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tread tag 0 fid 2 offset 24 count 262120
I1201 21:27:53.532858  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rread tag 0 count 0
I1201 21:27:53.533036  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tclunk tag 0 fid 2
I1201 21:27:53.533076  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rclunk tag 0
I1201 21:27:53.533257  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tclunk tag 0 fid 1
I1201 21:27:53.533287  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rclunk tag 0
I1201 21:27:53.890509  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tstat tag 0 fid 0
I1201 21:27:53.890584  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d249 dbd097ab 'd') m d775 at 0 mt 1764624471 l 4096 t 0 d 0 ext )
I1201 21:27:53.890994  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Twalk tag 0 fid 0 newfid 1 
I1201 21:27:53.891041  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rwalk tag 0 
I1201 21:27:53.891229  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Topen tag 0 fid 1 mode 0
I1201 21:27:53.891322  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Ropen tag 0 qid (c9d249 dbd097ab 'd') iounit 0
I1201 21:27:53.891482  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tstat tag 0 fid 0
I1201 21:27:53.891532  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d249 dbd097ab 'd') m d775 at 0 mt 1764624471 l 4096 t 0 d 0 ext )
I1201 21:27:53.891720  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tread tag 0 fid 1 offset 0 count 262120
I1201 21:27:53.891846  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rread tag 0 count 258
I1201 21:27:53.891999  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tread tag 0 fid 1 offset 258 count 261862
I1201 21:27:53.892037  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rread tag 0 count 0
I1201 21:27:53.892170  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tread tag 0 fid 1 offset 258 count 262120
I1201 21:27:53.892200  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rread tag 0 count 0
I1201 21:27:53.892345  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Twalk tag 0 fid 0 newfid 2 0:'test-1764624471985109686' 
I1201 21:27:53.892382  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rwalk tag 0 (c9d24c dbd097ab '') 
I1201 21:27:53.892506  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tstat tag 0 fid 2
I1201 21:27:53.892543  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rstat tag 0 st ('test-1764624471985109686' 'jenkins' 'jenkins' '' q (c9d24c dbd097ab '') m 644 at 0 mt 1764624471 l 24 t 0 d 0 ext )
I1201 21:27:53.892745  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tstat tag 0 fid 2
I1201 21:27:53.892799  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rstat tag 0 st ('test-1764624471985109686' 'jenkins' 'jenkins' '' q (c9d24c dbd097ab '') m 644 at 0 mt 1764624471 l 24 t 0 d 0 ext )
I1201 21:27:53.892985  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tclunk tag 0 fid 2
I1201 21:27:53.893014  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rclunk tag 0
I1201 21:27:53.893175  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1201 21:27:53.893238  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rwalk tag 0 (c9d24a dbd097ab '') 
I1201 21:27:53.893369  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tstat tag 0 fid 2
I1201 21:27:53.893410  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d24a dbd097ab '') m 644 at 0 mt 1764624471 l 24 t 0 d 0 ext )
I1201 21:27:53.893543  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tstat tag 0 fid 2
I1201 21:27:53.893576  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d24a dbd097ab '') m 644 at 0 mt 1764624471 l 24 t 0 d 0 ext )
I1201 21:27:53.893690  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tclunk tag 0 fid 2
I1201 21:27:53.893711  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rclunk tag 0
I1201 21:27:53.893862  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1201 21:27:53.893902  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rwalk tag 0 (c9d24b dbd097ab '') 
I1201 21:27:53.894019  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tstat tag 0 fid 2
I1201 21:27:53.894061  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d24b dbd097ab '') m 644 at 0 mt 1764624471 l 24 t 0 d 0 ext )
I1201 21:27:53.894198  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tstat tag 0 fid 2
I1201 21:27:53.894235  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d24b dbd097ab '') m 644 at 0 mt 1764624471 l 24 t 0 d 0 ext )
I1201 21:27:53.894403  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tclunk tag 0 fid 2
I1201 21:27:53.894432  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rclunk tag 0
I1201 21:27:53.894564  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tread tag 0 fid 1 offset 258 count 262120
I1201 21:27:53.894597  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rread tag 0 count 0
I1201 21:27:53.894756  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tclunk tag 0 fid 1
I1201 21:27:53.894797  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rclunk tag 0
I1201 21:27:53.896118  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1201 21:27:53.896198  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rerror tag 0 ename 'file not found' ecode 0
I1201 21:27:54.165909  545018 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36148 Tclunk tag 0 fid 0
I1201 21:27:54.165957  545018 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36148 Rclunk tag 0
I1201 21:27:54.167363  545018 main.go:127] stdlog: ufs.go:147 disconnected
I1201 21:27:54.187291  545018 out.go:179] * Unmounting /mount-9p ...
I1201 21:27:54.190368  545018 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1201 21:27:54.197597  545018 mount.go:180] unmount for /mount-9p ran successfully
I1201 21:27:54.197716  545018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/.mount-process: {Name:mkc79a65018d7566b5790740f910c9cb65765951 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 21:27:54.200886  545018 out.go:203] 
W1201 21:27:54.203914  545018 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1201 21:27:54.206849  545018 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.30s)

TestJSONOutput/pause/Command (1.87s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-669097 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-669097 --output=json --user=testUser: exit status 80 (1.868599354s)

-- stdout --
	{"specversion":"1.0","id":"311878c0-47bd-42b0-88ce-4a7ed6d1a4c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-669097 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"d8bc0086-bef0-473d-9418-8bda60582ef1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-01T21:44:16Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"2131927f-b823-4260-8678-d7e9344498ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-669097 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.87s)

TestJSONOutput/unpause/Command (2.08s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-669097 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-669097 --output=json --user=testUser: exit status 80 (2.082296317s)

-- stdout --
	{"specversion":"1.0","id":"c09f1a6d-ae14-49b4-957f-d3e27fd14acb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-669097 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"4a83ddbb-23af-4b89-9515-228217d50eb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-01T21:44:18Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"554b7a32-9b52-4229-94ac-927676fe2a17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-669097 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.08s)

TestKubernetesUpgrade (799.1s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-738753 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-738753 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.893855044s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-738753
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-738753: (3.286469393s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-738753 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-738753 status --format={{.Host}}: exit status 7 (100.285435ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-738753 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1201 22:02:52.876343  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-738753 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m36.434198275s)

-- stdout --
	* [kubernetes-upgrade-738753] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-738753" primary control-plane node in "kubernetes-upgrade-738753" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...

-- /stdout --
** stderr ** 
	I1201 22:02:36.861509  661844 out.go:360] Setting OutFile to fd 1 ...
	I1201 22:02:36.861680  661844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 22:02:36.861693  661844 out.go:374] Setting ErrFile to fd 2...
	I1201 22:02:36.861698  661844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 22:02:36.862009  661844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 22:02:36.862460  661844 out.go:368] Setting JSON to false
	I1201 22:02:36.863565  661844 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":13506,"bootTime":1764613051,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 22:02:36.863665  661844 start.go:143] virtualization:  
	I1201 22:02:36.870164  661844 out.go:179] * [kubernetes-upgrade-738753] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 22:02:36.873619  661844 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 22:02:36.873734  661844 notify.go:221] Checking for updates...
	I1201 22:02:36.879458  661844 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 22:02:36.882394  661844 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 22:02:36.885278  661844 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 22:02:36.888183  661844 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 22:02:36.891371  661844 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 22:02:36.894849  661844 config.go:182] Loaded profile config "kubernetes-upgrade-738753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1201 22:02:36.895512  661844 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 22:02:36.940913  661844 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 22:02:36.941052  661844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 22:02:37.045320  661844 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-01 22:02:37.027676706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 22:02:37.045430  661844 docker.go:319] overlay module found
	I1201 22:02:37.048750  661844 out.go:179] * Using the docker driver based on existing profile
	I1201 22:02:37.051738  661844 start.go:309] selected driver: docker
	I1201 22:02:37.051760  661844 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-738753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-738753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 22:02:37.051845  661844 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 22:02:37.052509  661844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 22:02:37.153442  661844 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:57 SystemTime:2025-12-01 22:02:37.142498508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 22:02:37.153793  661844 cni.go:84] Creating CNI manager for ""
	I1201 22:02:37.153853  661844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 22:02:37.153889  661844 start.go:353] cluster config:
	{Name:kubernetes-upgrade-738753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-738753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 22:02:37.157072  661844 out.go:179] * Starting "kubernetes-upgrade-738753" primary control-plane node in "kubernetes-upgrade-738753" cluster
	I1201 22:02:37.163709  661844 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 22:02:37.166651  661844 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 22:02:37.170957  661844 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 22:02:37.171183  661844 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 22:02:37.228227  661844 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 22:02:37.228249  661844 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1201 22:02:37.243019  661844 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1201 22:02:37.441739  661844 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1201 22:02:37.442030  661844 cache.go:107] acquiring lock: {Name:mkc02adc0b0ac86da96d7b1c6f73dd96db198bdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 22:02:37.442255  661844 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 22:02:37.442305  661844 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 286.722µs
	I1201 22:02:37.442730  661844 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 22:02:37.442485  661844 cache.go:107] acquiring lock: {Name:mk453dcc67fddeb9d4497c9de9efb4fa1295449c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 22:02:37.442519  661844 cache.go:107] acquiring lock: {Name:mk419ddf7fad28d46855543ef84396416e53becc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 22:02:37.442925  661844 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 22:02:37.442934  661844 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 416.591µs
	I1201 22:02:37.442942  661844 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 22:02:37.442565  661844 cache.go:107] acquiring lock: {Name:mka55d294ab8a696f44b35601f713e0abbf24c5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 22:02:37.442976  661844 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 22:02:37.442985  661844 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 421.89µs
	I1201 22:02:37.442991  661844 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 22:02:37.442590  661844 cache.go:107] acquiring lock: {Name:mk6dcec1fac0989e081c750d70caa7d5974f0e1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 22:02:37.443014  661844 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 22:02:37.443019  661844 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 430.112µs
	I1201 22:02:37.443030  661844 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 22:02:37.442605  661844 cache.go:107] acquiring lock: {Name:mkf9aa1f704582196eb72cf90c132f43843b4423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 22:02:37.443053  661844 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1201 22:02:37.443058  661844 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 454.858µs
	I1201 22:02:37.443063  661844 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 22:02:37.442645  661844 cache.go:107] acquiring lock: {Name:mk60d129c4890b38a9b86e2bfa4a9fa21bc4f57a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 22:02:37.443083  661844 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 22:02:37.443088  661844 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 444.569µs
	I1201 22:02:37.443093  661844 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 22:02:37.442673  661844 cache.go:107] acquiring lock: {Name:mk345d9c863dd9143d9156cb17f795118869c197 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 22:02:37.443115  661844 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 22:02:37.443120  661844 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 447.572µs
	I1201 22:02:37.443125  661844 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 22:02:37.443195  661844 cache.go:115] /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 22:02:37.443205  661844 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 725.696µs
	I1201 22:02:37.443212  661844 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 22:02:37.443230  661844 cache.go:87] Successfully saved all images to host disk.
	I1201 22:02:37.443301  661844 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/kubernetes-upgrade-738753/config.json ...
	I1201 22:02:37.443584  661844 cache.go:243] Successfully downloaded all kic artifacts
	I1201 22:02:37.443633  661844 start.go:360] acquireMachinesLock for kubernetes-upgrade-738753: {Name:mkf9c9c32cb8c44c17fe4c6387d9366c625531f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 22:02:37.443676  661844 start.go:364] duration metric: took 29.267µs to acquireMachinesLock for "kubernetes-upgrade-738753"
	I1201 22:02:37.443692  661844 start.go:96] Skipping create...Using existing machine configuration
	I1201 22:02:37.443697  661844 fix.go:54] fixHost starting: 
	I1201 22:02:37.444065  661844 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-738753 --format={{.State.Status}}
	I1201 22:02:37.491786  661844 fix.go:112] recreateIfNeeded on kubernetes-upgrade-738753: state=Stopped err=<nil>
	W1201 22:02:37.491899  661844 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 22:02:37.495510  661844 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-738753" ...
	I1201 22:02:37.495686  661844 cli_runner.go:164] Run: docker start kubernetes-upgrade-738753
	I1201 22:02:38.001219  661844 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-738753 --format={{.State.Status}}
	I1201 22:02:38.076859  661844 kic.go:430] container "kubernetes-upgrade-738753" state is running.
	I1201 22:02:38.077320  661844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-738753
	I1201 22:02:38.179700  661844 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/kubernetes-upgrade-738753/config.json ...
	I1201 22:02:38.182776  661844 machine.go:94] provisionDockerMachine start ...
	I1201 22:02:38.182878  661844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738753
	I1201 22:02:38.251337  661844 main.go:143] libmachine: Using SSH client type: native
	I1201 22:02:38.251671  661844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1201 22:02:38.251682  661844 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 22:02:38.252365  661844 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1201 22:02:41.408495  661844 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-738753
	
	I1201 22:02:41.408521  661844 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-738753"
	I1201 22:02:41.408595  661844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738753
	I1201 22:02:41.426297  661844 main.go:143] libmachine: Using SSH client type: native
	I1201 22:02:41.426614  661844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1201 22:02:41.426627  661844 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-738753 && echo "kubernetes-upgrade-738753" | sudo tee /etc/hostname
	I1201 22:02:41.594613  661844 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-738753
	
	I1201 22:02:41.594751  661844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738753
	I1201 22:02:41.618188  661844 main.go:143] libmachine: Using SSH client type: native
	I1201 22:02:41.618491  661844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1201 22:02:41.618510  661844 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-738753' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-738753/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-738753' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 22:02:41.780626  661844 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 22:02:41.780649  661844 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-482752/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-482752/.minikube}
	I1201 22:02:41.780677  661844 ubuntu.go:190] setting up certificates
	I1201 22:02:41.780687  661844 provision.go:84] configureAuth start
	I1201 22:02:41.780749  661844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-738753
	I1201 22:02:41.804073  661844 provision.go:143] copyHostCerts
	I1201 22:02:41.804138  661844 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem, removing ...
	I1201 22:02:41.804151  661844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 22:02:41.804220  661844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem (1082 bytes)
	I1201 22:02:41.804335  661844 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem, removing ...
	I1201 22:02:41.804341  661844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 22:02:41.804384  661844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem (1123 bytes)
	I1201 22:02:41.804472  661844 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem, removing ...
	I1201 22:02:41.804480  661844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 22:02:41.804503  661844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem (1675 bytes)
	I1201 22:02:41.804547  661844 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-738753 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-738753 localhost minikube]
	I1201 22:02:42.233542  661844 provision.go:177] copyRemoteCerts
	I1201 22:02:42.233679  661844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 22:02:42.233744  661844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738753
	I1201 22:02:42.259575  661844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/kubernetes-upgrade-738753/id_rsa Username:docker}
	I1201 22:02:42.372145  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 22:02:42.391261  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1201 22:02:42.413919  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 22:02:42.438820  661844 provision.go:87] duration metric: took 658.118492ms to configureAuth
	I1201 22:02:42.438900  661844 ubuntu.go:206] setting minikube options for container-runtime
	I1201 22:02:42.439124  661844 config.go:182] Loaded profile config "kubernetes-upgrade-738753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 22:02:42.439303  661844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738753
	I1201 22:02:42.466328  661844 main.go:143] libmachine: Using SSH client type: native
	I1201 22:02:42.466651  661844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1201 22:02:42.466665  661844 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 22:02:42.841970  661844 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 22:02:42.841997  661844 machine.go:97] duration metric: took 4.659196378s to provisionDockerMachine
	I1201 22:02:42.842009  661844 start.go:293] postStartSetup for "kubernetes-upgrade-738753" (driver="docker")
	I1201 22:02:42.842033  661844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 22:02:42.842111  661844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 22:02:42.842166  661844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738753
	I1201 22:02:42.862374  661844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/kubernetes-upgrade-738753/id_rsa Username:docker}
	I1201 22:02:42.976711  661844 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 22:02:42.987671  661844 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 22:02:42.987703  661844 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 22:02:42.987716  661844 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/addons for local assets ...
	I1201 22:02:42.987773  661844 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/files for local assets ...
	I1201 22:02:42.987856  661844 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> 4860022.pem in /etc/ssl/certs
	I1201 22:02:42.987968  661844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 22:02:42.999940  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 22:02:43.032972  661844 start.go:296] duration metric: took 190.937316ms for postStartSetup
	I1201 22:02:43.033053  661844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 22:02:43.033119  661844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738753
	I1201 22:02:43.053244  661844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/kubernetes-upgrade-738753/id_rsa Username:docker}
	I1201 22:02:43.161114  661844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 22:02:43.167571  661844 fix.go:56] duration metric: took 5.723868733s for fixHost
	I1201 22:02:43.167627  661844 start.go:83] releasing machines lock for "kubernetes-upgrade-738753", held for 5.723940494s
	I1201 22:02:43.167705  661844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-738753
	I1201 22:02:43.185670  661844 ssh_runner.go:195] Run: cat /version.json
	I1201 22:02:43.185722  661844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738753
	I1201 22:02:43.185753  661844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 22:02:43.185818  661844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738753
	I1201 22:02:43.225030  661844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/kubernetes-upgrade-738753/id_rsa Username:docker}
	I1201 22:02:43.237280  661844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/kubernetes-upgrade-738753/id_rsa Username:docker}
	I1201 22:02:43.347394  661844 ssh_runner.go:195] Run: systemctl --version
	I1201 22:02:43.461441  661844 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 22:02:43.508994  661844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 22:02:43.515231  661844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 22:02:43.515358  661844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 22:02:43.526177  661844 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 22:02:43.526251  661844 start.go:496] detecting cgroup driver to use...
	I1201 22:02:43.526307  661844 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1201 22:02:43.526388  661844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 22:02:43.546466  661844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 22:02:43.564391  661844 docker.go:218] disabling cri-docker service (if available) ...
	I1201 22:02:43.564502  661844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 22:02:43.582419  661844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 22:02:43.598117  661844 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 22:02:43.758150  661844 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 22:02:43.932791  661844 docker.go:234] disabling docker service ...
	I1201 22:02:43.932865  661844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 22:02:43.948917  661844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 22:02:43.964429  661844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 22:02:44.137733  661844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 22:02:44.326810  661844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 22:02:44.347098  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 22:02:44.361895  661844 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 22:02:44.361967  661844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:02:44.371739  661844 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 22:02:44.371851  661844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:02:44.382515  661844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:02:44.392081  661844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:02:44.401857  661844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 22:02:44.411503  661844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:02:44.420994  661844 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:02:44.430128  661844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:02:44.440884  661844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 22:02:44.449367  661844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 22:02:44.457805  661844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 22:02:44.618951  661844 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 22:02:44.812359  661844 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 22:02:44.812450  661844 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 22:02:44.817301  661844 start.go:564] Will wait 60s for crictl version
	I1201 22:02:44.817373  661844 ssh_runner.go:195] Run: which crictl
	I1201 22:02:44.822053  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 22:02:44.852254  661844 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 22:02:44.852354  661844 ssh_runner.go:195] Run: crio --version
	I1201 22:02:44.891592  661844 ssh_runner.go:195] Run: crio --version
	I1201 22:02:44.936293  661844 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1201 22:02:44.939236  661844 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-738753 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 22:02:44.961054  661844 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1201 22:02:44.967670  661844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 22:02:44.982488  661844 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-738753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-738753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 22:02:44.982607  661844 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 22:02:44.982658  661844 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 22:02:45.049408  661844 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1201 22:02:45.049438  661844 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1201 22:02:45.049520  661844 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 22:02:45.049973  661844 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1201 22:02:45.050136  661844 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1201 22:02:45.050256  661844 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1201 22:02:45.050357  661844 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1201 22:02:45.050449  661844 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1201 22:02:45.052658  661844 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1201 22:02:45.050536  661844 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1201 22:02:45.059662  661844 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1201 22:02:45.062384  661844 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1201 22:02:45.062677  661844 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1201 22:02:45.062724  661844 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1201 22:02:45.062821  661844 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1201 22:02:45.062907  661844 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1201 22:02:45.062981  661844 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 22:02:45.063511  661844 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1201 22:02:45.421404  661844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1201 22:02:45.434008  661844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1201 22:02:45.441684  661844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1201 22:02:45.465757  661844 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1201 22:02:45.465805  661844 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1201 22:02:45.465889  661844 ssh_runner.go:195] Run: which crictl
	I1201 22:02:45.466345  661844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1201 22:02:45.486796  661844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1201 22:02:45.499538  661844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1201 22:02:45.519978  661844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1201 22:02:45.633840  661844 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1201 22:02:45.633882  661844 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1201 22:02:45.634012  661844 ssh_runner.go:195] Run: which crictl
	I1201 22:02:45.647115  661844 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1201 22:02:45.647288  661844 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1201 22:02:45.647374  661844 ssh_runner.go:195] Run: which crictl
	I1201 22:02:45.723760  661844 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1201 22:02:45.723949  661844 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1201 22:02:45.724012  661844 ssh_runner.go:195] Run: which crictl
	I1201 22:02:45.723912  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1201 22:02:45.729751  661844 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1201 22:02:45.729928  661844 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1201 22:02:45.730001  661844 ssh_runner.go:195] Run: which crictl
	I1201 22:02:45.729870  661844 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1201 22:02:45.730118  661844 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1201 22:02:45.730181  661844 ssh_runner.go:195] Run: which crictl
	I1201 22:02:45.744955  661844 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1201 22:02:45.745207  661844 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1201 22:02:45.745269  661844 ssh_runner.go:195] Run: which crictl
	I1201 22:02:45.745122  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1201 22:02:45.745163  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1201 22:02:45.836520  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1201 22:02:45.836623  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1201 22:02:45.836715  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1201 22:02:45.836798  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1201 22:02:45.836840  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1201 22:02:45.836873  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1201 22:02:45.836897  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1201 22:02:46.038339  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1201 22:02:46.038521  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1201 22:02:46.038639  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1201 22:02:46.038805  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1201 22:02:46.038951  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1201 22:02:46.039074  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1201 22:02:46.039207  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1201 22:02:46.209565  661844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1201 22:02:46.209969  661844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1201 22:02:46.210010  661844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1201 22:02:46.210160  661844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1201 22:02:46.209765  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1201 22:02:46.209819  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1201 22:02:46.209849  661844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1201 22:02:46.209894  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1201 22:02:46.209659  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1201 22:02:46.210517  661844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1201 22:02:46.311469  661844 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1201 22:02:46.311508  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1201 22:02:46.311556  661844 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1201 22:02:46.311577  661844 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1201 22:02:46.311595  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1201 22:02:46.311625  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1201 22:02:46.311652  661844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1201 22:02:46.311725  661844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1201 22:02:46.311799  661844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1201 22:02:46.311844  661844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1201 22:02:46.311907  661844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1201 22:02:46.311959  661844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1201 22:02:46.311993  661844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1201 22:02:46.312049  661844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	W1201 22:02:46.326559  661844 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1201 22:02:46.326843  661844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 22:02:46.336061  661844 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1201 22:02:46.336149  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1201 22:02:46.371229  661844 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1201 22:02:46.371328  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1201 22:02:46.371265  661844 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1201 22:02:46.371470  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1201 22:02:46.371420  661844 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1201 22:02:46.371563  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1201 22:02:46.401516  661844 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1201 22:02:46.401590  661844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1201 22:02:46.425345  661844 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1201 22:02:46.425469  661844 retry.go:31] will retry after 153.919099ms: ssh: rejected: connect failed (open failed)
	W1201 22:02:46.425477  661844 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1201 22:02:46.425576  661844 retry.go:31] will retry after 313.514748ms: ssh: rejected: connect failed (open failed)
	I1201 22:02:46.580501  661844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738753
	I1201 22:02:46.615256  661844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/kubernetes-upgrade-738753/id_rsa Username:docker}
	I1201 22:02:46.656561  661844 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1201 22:02:46.656602  661844 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 22:02:46.656667  661844 ssh_runner.go:195] Run: which crictl
	I1201 22:02:46.656731  661844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738753
	I1201 22:02:46.694373  661844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/kubernetes-upgrade-738753/id_rsa Username:docker}
	I1201 22:02:46.740107  661844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738753
	I1201 22:02:46.782361  661844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/kubernetes-upgrade-738753/id_rsa Username:docker}
	I1201 22:02:47.132380  661844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 22:02:47.132437  661844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1201 22:02:47.435296  661844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1201 22:02:47.435419  661844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1201 22:02:47.541717  661844 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1201 22:02:47.541764  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1201 22:02:47.667871  661844 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1201 22:02:47.667943  661844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1201 22:02:50.089654  661844 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (2.421683802s)
	I1201 22:02:50.089685  661844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1201 22:02:50.089706  661844 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1201 22:02:50.089758  661844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1201 22:02:51.952207  661844 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.862421761s)
	I1201 22:02:51.952235  661844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1201 22:02:51.952254  661844 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1201 22:02:51.952325  661844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1201 22:02:54.281519  661844 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (2.329173278s)
	I1201 22:02:54.281544  661844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1201 22:02:54.281561  661844 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1201 22:02:54.281609  661844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1201 22:02:56.200206  661844 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.918571416s)
	I1201 22:02:56.200230  661844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1201 22:02:56.200249  661844 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1201 22:02:56.200298  661844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1201 22:02:58.386737  661844 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (2.186418057s)
	I1201 22:02:58.386769  661844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1201 22:02:58.386791  661844 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1201 22:02:58.386841  661844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1201 22:02:59.212933  661844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1201 22:02:59.212977  661844 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1201 22:02:59.213037  661844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1201 22:03:02.373859  661844 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (3.160795975s)
	I1201 22:03:02.373895  661844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-482752/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1201 22:03:02.373915  661844 cache_images.go:125] Successfully loaded all cached images
	I1201 22:03:02.373920  661844 cache_images.go:94] duration metric: took 17.324468676s to LoadCachedImages
	I1201 22:03:02.373928  661844 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1201 22:03:02.374042  661844 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-738753 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-738753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 22:03:02.374134  661844 ssh_runner.go:195] Run: crio config
	I1201 22:03:02.465724  661844 cni.go:84] Creating CNI manager for ""
	I1201 22:03:02.465750  661844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 22:03:02.465773  661844 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 22:03:02.465798  661844 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-738753 NodeName:kubernetes-upgrade-738753 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 22:03:02.465936  661844 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-738753"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 22:03:02.466016  661844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 22:03:02.476204  661844 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1201 22:03:02.476300  661844 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 22:03:02.491864  661844 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1201 22:03:02.491916  661844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 22:03:02.491869  661844 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1201 22:03:02.492029  661844 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1201 22:03:02.492055  661844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1201 22:03:02.492155  661844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1201 22:03:02.533336  661844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1201 22:03:02.533429  661844 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1201 22:03:02.533442  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1201 22:03:02.533491  661844 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1201 22:03:02.533502  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1201 22:03:02.549146  661844 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1201 22:03:02.549245  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1201 22:03:03.749222  661844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 22:03:03.766424  661844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1201 22:03:03.793889  661844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 22:03:03.814563  661844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1201 22:03:03.839829  661844 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1201 22:03:03.846221  661844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 22:03:03.867460  661844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 22:03:04.113423  661844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 22:03:04.148957  661844 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/kubernetes-upgrade-738753 for IP: 192.168.76.2
	I1201 22:03:04.148976  661844 certs.go:195] generating shared ca certs ...
	I1201 22:03:04.148993  661844 certs.go:227] acquiring lock for ca certs: {Name:mk0475ccdbd6f854bab22fd8dfb32cc1af021336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 22:03:04.149136  661844 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key
	I1201 22:03:04.149179  661844 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key
	I1201 22:03:04.149188  661844 certs.go:257] generating profile certs ...
	I1201 22:03:04.149273  661844 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/kubernetes-upgrade-738753/client.key
	I1201 22:03:04.149348  661844 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/kubernetes-upgrade-738753/apiserver.key.3a8c92f5
	I1201 22:03:04.149389  661844 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/kubernetes-upgrade-738753/proxy-client.key
	I1201 22:03:04.149502  661844 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem (1338 bytes)
	W1201 22:03:04.149533  661844 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002_empty.pem, impossibly tiny 0 bytes
	I1201 22:03:04.149541  661844 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 22:03:04.149567  661844 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem (1082 bytes)
	I1201 22:03:04.149590  661844 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem (1123 bytes)
	I1201 22:03:04.149617  661844 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem (1675 bytes)
	I1201 22:03:04.149664  661844 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 22:03:04.150331  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 22:03:04.200693  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 22:03:04.248304  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 22:03:04.328961  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1201 22:03:04.358813  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/kubernetes-upgrade-738753/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1201 22:03:04.385594  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/kubernetes-upgrade-738753/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 22:03:04.407915  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/kubernetes-upgrade-738753/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 22:03:04.478758  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/kubernetes-upgrade-738753/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 22:03:04.510343  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem --> /usr/share/ca-certificates/486002.pem (1338 bytes)
	I1201 22:03:04.541234  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /usr/share/ca-certificates/4860022.pem (1708 bytes)
	I1201 22:03:04.567900  661844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 22:03:04.590729  661844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 22:03:04.604722  661844 ssh_runner.go:195] Run: openssl version
	I1201 22:03:04.611837  661844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 22:03:04.620902  661844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 22:03:04.625853  661844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 22:03:04.625931  661844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 22:03:04.670336  661844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 22:03:04.679944  661844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/486002.pem && ln -fs /usr/share/ca-certificates/486002.pem /etc/ssl/certs/486002.pem"
	I1201 22:03:04.689071  661844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/486002.pem
	I1201 22:03:04.693715  661844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 22:03:04.693802  661844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/486002.pem
	I1201 22:03:04.736331  661844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/486002.pem /etc/ssl/certs/51391683.0"
	I1201 22:03:04.745160  661844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4860022.pem && ln -fs /usr/share/ca-certificates/4860022.pem /etc/ssl/certs/4860022.pem"
	I1201 22:03:04.758784  661844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4860022.pem
	I1201 22:03:04.763274  661844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 22:03:04.763358  661844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4860022.pem
	I1201 22:03:04.806302  661844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4860022.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 22:03:04.816482  661844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 22:03:04.821603  661844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 22:03:04.866130  661844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 22:03:04.919719  661844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 22:03:04.995877  661844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 22:03:05.050729  661844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 22:03:05.096116  661844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 22:03:05.137962  661844 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-738753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-738753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 22:03:05.138048  661844 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 22:03:05.138120  661844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 22:03:05.196356  661844 cri.go:89] found id: ""
	I1201 22:03:05.196428  661844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 22:03:05.207324  661844 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 22:03:05.207344  661844 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 22:03:05.207403  661844 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 22:03:05.216930  661844 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 22:03:05.217481  661844 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-738753" does not appear in /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 22:03:05.217979  661844 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-482752/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-738753" cluster setting kubeconfig missing "kubernetes-upgrade-738753" context setting]
	I1201 22:03:05.219009  661844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/kubeconfig: {Name:mk92cfd0553ba70a7f11610c1bc1b8b04b905ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 22:03:05.220669  661844 kapi.go:59] client config for kubernetes-upgrade-738753: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/kubernetes-upgrade-738753/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/kubernetes-upgrade-738753/client.key", CAFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 22:03:05.221256  661844 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1201 22:03:05.221278  661844 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1201 22:03:05.221286  661844 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1201 22:03:05.221293  661844 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1201 22:03:05.221303  661844 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1201 22:03:05.221693  661844 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 22:03:05.236678  661844 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-01 22:02:15.497082782 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-01 22:03:03.832064095 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-738753"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1201 22:03:05.236702  661844 kubeadm.go:1161] stopping kube-system containers ...
	I1201 22:03:05.236717  661844 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1201 22:03:05.236776  661844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 22:03:05.273982  661844 cri.go:89] found id: ""
	I1201 22:03:05.274060  661844 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1201 22:03:05.290762  661844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 22:03:05.300174  661844 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec  1 22:02 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec  1 22:02 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec  1 22:02 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec  1 22:02 /etc/kubernetes/scheduler.conf
	
	I1201 22:03:05.300245  661844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1201 22:03:05.308701  661844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1201 22:03:05.317678  661844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1201 22:03:05.325862  661844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 22:03:05.325977  661844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 22:03:05.334878  661844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1201 22:03:05.343482  661844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 22:03:05.343604  661844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 22:03:05.351835  661844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 22:03:05.360332  661844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 22:03:05.407611  661844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 22:03:06.964055  661844 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.556406929s)
	I1201 22:03:06.964168  661844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1201 22:03:07.252304  661844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 22:03:07.328510  661844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1201 22:03:07.459042  661844 api_server.go:52] waiting for apiserver process to appear ...
	I1201 22:03:07.459440  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:07.959845  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:08.459316  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:08.961935  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:09.459990  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:09.960239  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:10.460197  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:10.960035  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:11.459252  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:11.960017  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:12.459301  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:12.960233  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:13.459282  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:13.960128  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:14.459275  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:14.960090  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:15.460098  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:15.960224  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:16.459257  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:16.959964  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:17.459844  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:17.959278  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:18.459838  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:18.959785  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:19.459931  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:19.959820  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:20.459279  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:20.959283  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:21.459255  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:21.960093  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:22.460133  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:22.959259  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:23.460081  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:23.959209  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:24.459480  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:24.959251  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:25.460203  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:25.959910  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:26.459972  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:26.959445  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:27.460184  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:27.959275  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:28.459318  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:28.959321  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:29.459286  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:29.959208  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:30.459985  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:30.959203  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:31.459996  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:31.959620  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:32.459272  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:32.959295  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:33.459207  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:33.959719  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:34.459279  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:34.960013  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:35.459551  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:35.959864  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:36.459237  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:36.959922  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:37.459285  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:37.959764  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:38.459713  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:38.959977  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:39.459343  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:39.959294  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:40.459301  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:40.960145  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:41.459322  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:41.959520  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:42.459288  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:42.959380  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:43.459258  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:43.959642  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:44.460024  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:44.960056  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:45.459802  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:45.960238  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:46.459317  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:46.959992  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:47.460114  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:47.960095  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:48.459302  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:48.959259  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:49.459813  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:49.960093  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:50.459292  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:50.960076  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:51.459842  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:51.959902  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:52.459304  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:52.960020  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:53.459257  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:53.959846  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:54.459787  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:54.959991  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:55.459274  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:55.959262  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:56.459261  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:56.960145  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:57.459264  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:57.959216  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:58.460152  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:58.959913  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:59.459982  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:03:59.960118  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:00.459590  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:00.959326  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:01.460090  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:01.960005  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:02.459312  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:02.959238  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:03.460140  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:03.960111  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:04.459311  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:04.960032  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:05.459883  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:05.959944  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:06.459292  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:06.959341  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:07.459357  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:04:07.459450  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:04:07.487999  661844 cri.go:89] found id: ""
	I1201 22:04:07.488026  661844 logs.go:282] 0 containers: []
	W1201 22:04:07.488036  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:04:07.488044  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:04:07.488115  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:04:07.516407  661844 cri.go:89] found id: ""
	I1201 22:04:07.516433  661844 logs.go:282] 0 containers: []
	W1201 22:04:07.516443  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:04:07.516450  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:04:07.516515  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:04:07.542584  661844 cri.go:89] found id: ""
	I1201 22:04:07.542614  661844 logs.go:282] 0 containers: []
	W1201 22:04:07.542624  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:04:07.542631  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:04:07.542697  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:04:07.569351  661844 cri.go:89] found id: ""
	I1201 22:04:07.569375  661844 logs.go:282] 0 containers: []
	W1201 22:04:07.569383  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:04:07.569390  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:04:07.569452  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:04:07.599614  661844 cri.go:89] found id: ""
	I1201 22:04:07.599635  661844 logs.go:282] 0 containers: []
	W1201 22:04:07.599643  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:04:07.599649  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:04:07.599711  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:04:07.632522  661844 cri.go:89] found id: ""
	I1201 22:04:07.632545  661844 logs.go:282] 0 containers: []
	W1201 22:04:07.632553  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:04:07.632560  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:04:07.632618  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:04:07.659771  661844 cri.go:89] found id: ""
	I1201 22:04:07.659799  661844 logs.go:282] 0 containers: []
	W1201 22:04:07.659808  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:04:07.659815  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:04:07.659875  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:04:07.688455  661844 cri.go:89] found id: ""
	I1201 22:04:07.688478  661844 logs.go:282] 0 containers: []
	W1201 22:04:07.688486  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:04:07.688496  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:04:07.688507  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:04:07.723528  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:04:07.723558  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:04:07.794433  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:04:07.794473  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:04:07.813989  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:04:07.814019  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:04:07.880800  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:04:07.880826  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:04:07.880841  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:04:10.423913  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:10.434937  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:04:10.435007  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:04:10.468309  661844 cri.go:89] found id: ""
	I1201 22:04:10.468334  661844 logs.go:282] 0 containers: []
	W1201 22:04:10.468343  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:04:10.468350  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:04:10.468427  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:04:10.495751  661844 cri.go:89] found id: ""
	I1201 22:04:10.495774  661844 logs.go:282] 0 containers: []
	W1201 22:04:10.495783  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:04:10.495790  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:04:10.495849  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:04:10.523206  661844 cri.go:89] found id: ""
	I1201 22:04:10.523231  661844 logs.go:282] 0 containers: []
	W1201 22:04:10.523241  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:04:10.523248  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:04:10.523312  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:04:10.553060  661844 cri.go:89] found id: ""
	I1201 22:04:10.553087  661844 logs.go:282] 0 containers: []
	W1201 22:04:10.553101  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:04:10.553109  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:04:10.553170  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:04:10.582868  661844 cri.go:89] found id: ""
	I1201 22:04:10.582892  661844 logs.go:282] 0 containers: []
	W1201 22:04:10.582901  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:04:10.582908  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:04:10.582968  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:04:10.617991  661844 cri.go:89] found id: ""
	I1201 22:04:10.618017  661844 logs.go:282] 0 containers: []
	W1201 22:04:10.618027  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:04:10.618034  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:04:10.618095  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:04:10.645968  661844 cri.go:89] found id: ""
	I1201 22:04:10.645995  661844 logs.go:282] 0 containers: []
	W1201 22:04:10.646005  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:04:10.646012  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:04:10.646074  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:04:10.675396  661844 cri.go:89] found id: ""
	I1201 22:04:10.675427  661844 logs.go:282] 0 containers: []
	W1201 22:04:10.675445  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:04:10.675455  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:04:10.675482  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:04:10.693098  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:04:10.693128  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:04:10.763104  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:04:10.763124  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:04:10.763159  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:04:10.803302  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:04:10.803338  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:04:10.837824  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:04:10.837856  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:04:13.406839  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:13.418045  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:04:13.418173  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:04:13.446910  661844 cri.go:89] found id: ""
	I1201 22:04:13.446947  661844 logs.go:282] 0 containers: []
	W1201 22:04:13.446957  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:04:13.446964  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:04:13.447026  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:04:13.474230  661844 cri.go:89] found id: ""
	I1201 22:04:13.474265  661844 logs.go:282] 0 containers: []
	W1201 22:04:13.474274  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:04:13.474300  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:04:13.474398  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:04:13.501877  661844 cri.go:89] found id: ""
	I1201 22:04:13.501910  661844 logs.go:282] 0 containers: []
	W1201 22:04:13.501919  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:04:13.501926  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:04:13.502011  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:04:13.529299  661844 cri.go:89] found id: ""
	I1201 22:04:13.529330  661844 logs.go:282] 0 containers: []
	W1201 22:04:13.529340  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:04:13.529347  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:04:13.529424  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:04:13.557498  661844 cri.go:89] found id: ""
	I1201 22:04:13.557524  661844 logs.go:282] 0 containers: []
	W1201 22:04:13.557534  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:04:13.557541  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:04:13.557607  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:04:13.585554  661844 cri.go:89] found id: ""
	I1201 22:04:13.585583  661844 logs.go:282] 0 containers: []
	W1201 22:04:13.585594  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:04:13.585602  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:04:13.585670  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:04:13.614205  661844 cri.go:89] found id: ""
	I1201 22:04:13.614231  661844 logs.go:282] 0 containers: []
	W1201 22:04:13.614240  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:04:13.614246  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:04:13.614307  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:04:13.641788  661844 cri.go:89] found id: ""
	I1201 22:04:13.641813  661844 logs.go:282] 0 containers: []
	W1201 22:04:13.641823  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:04:13.641832  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:04:13.641845  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:04:13.683084  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:04:13.683124  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:04:13.714397  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:04:13.714431  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:04:13.788889  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:04:13.788923  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:04:13.806782  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:04:13.806820  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:04:13.876759  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:04:16.377241  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:16.391702  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:04:16.391773  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:04:16.423105  661844 cri.go:89] found id: ""
	I1201 22:04:16.423159  661844 logs.go:282] 0 containers: []
	W1201 22:04:16.423169  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:04:16.423177  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:04:16.423277  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:04:16.452221  661844 cri.go:89] found id: ""
	I1201 22:04:16.452248  661844 logs.go:282] 0 containers: []
	W1201 22:04:16.452258  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:04:16.452266  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:04:16.452371  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:04:16.483851  661844 cri.go:89] found id: ""
	I1201 22:04:16.483879  661844 logs.go:282] 0 containers: []
	W1201 22:04:16.483891  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:04:16.483898  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:04:16.483964  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:04:16.512587  661844 cri.go:89] found id: ""
	I1201 22:04:16.512655  661844 logs.go:282] 0 containers: []
	W1201 22:04:16.512672  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:04:16.512680  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:04:16.512751  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:04:16.539244  661844 cri.go:89] found id: ""
	I1201 22:04:16.539268  661844 logs.go:282] 0 containers: []
	W1201 22:04:16.539277  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:04:16.539288  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:04:16.539360  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:04:16.568226  661844 cri.go:89] found id: ""
	I1201 22:04:16.568252  661844 logs.go:282] 0 containers: []
	W1201 22:04:16.568262  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:04:16.568270  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:04:16.568361  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:04:16.596934  661844 cri.go:89] found id: ""
	I1201 22:04:16.596960  661844 logs.go:282] 0 containers: []
	W1201 22:04:16.596969  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:04:16.596976  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:04:16.597068  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:04:16.623947  661844 cri.go:89] found id: ""
	I1201 22:04:16.623971  661844 logs.go:282] 0 containers: []
	W1201 22:04:16.623981  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:04:16.623991  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:04:16.624033  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:04:16.691970  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:04:16.692059  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:04:16.714604  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:04:16.714633  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:04:16.786874  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:04:16.786898  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:04:16.786922  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:04:16.827746  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:04:16.827783  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:04:19.357752  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:19.369080  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:04:19.369148  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:04:19.398179  661844 cri.go:89] found id: ""
	I1201 22:04:19.398202  661844 logs.go:282] 0 containers: []
	W1201 22:04:19.398211  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:04:19.398217  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:04:19.398278  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:04:19.426226  661844 cri.go:89] found id: ""
	I1201 22:04:19.426250  661844 logs.go:282] 0 containers: []
	W1201 22:04:19.426258  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:04:19.426265  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:04:19.426332  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:04:19.454332  661844 cri.go:89] found id: ""
	I1201 22:04:19.454404  661844 logs.go:282] 0 containers: []
	W1201 22:04:19.454429  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:04:19.454450  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:04:19.454547  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:04:19.483653  661844 cri.go:89] found id: ""
	I1201 22:04:19.483679  661844 logs.go:282] 0 containers: []
	W1201 22:04:19.483689  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:04:19.483696  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:04:19.483760  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:04:19.509887  661844 cri.go:89] found id: ""
	I1201 22:04:19.509974  661844 logs.go:282] 0 containers: []
	W1201 22:04:19.509999  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:04:19.510021  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:04:19.510109  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:04:19.537202  661844 cri.go:89] found id: ""
	I1201 22:04:19.537227  661844 logs.go:282] 0 containers: []
	W1201 22:04:19.537267  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:04:19.537279  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:04:19.537354  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:04:19.567379  661844 cri.go:89] found id: ""
	I1201 22:04:19.567406  661844 logs.go:282] 0 containers: []
	W1201 22:04:19.567415  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:04:19.567422  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:04:19.567488  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:04:19.594436  661844 cri.go:89] found id: ""
	I1201 22:04:19.594458  661844 logs.go:282] 0 containers: []
	W1201 22:04:19.594466  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:04:19.594476  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:04:19.594488  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:04:19.630208  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:04:19.630239  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:04:19.696553  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:04:19.696587  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:04:19.714255  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:04:19.714285  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:04:19.782447  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:04:19.782469  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:04:19.782483  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:04:22.323424  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:22.334579  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:04:22.334648  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:04:22.360926  661844 cri.go:89] found id: ""
	I1201 22:04:22.360950  661844 logs.go:282] 0 containers: []
	W1201 22:04:22.360958  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:04:22.360965  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:04:22.361024  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:04:22.389212  661844 cri.go:89] found id: ""
	I1201 22:04:22.389247  661844 logs.go:282] 0 containers: []
	W1201 22:04:22.389257  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:04:22.389263  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:04:22.389334  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:04:22.417485  661844 cri.go:89] found id: ""
	I1201 22:04:22.417564  661844 logs.go:282] 0 containers: []
	W1201 22:04:22.417584  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:04:22.417593  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:04:22.417679  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:04:22.448787  661844 cri.go:89] found id: ""
	I1201 22:04:22.448808  661844 logs.go:282] 0 containers: []
	W1201 22:04:22.448823  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:04:22.448830  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:04:22.448892  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:04:22.475713  661844 cri.go:89] found id: ""
	I1201 22:04:22.475741  661844 logs.go:282] 0 containers: []
	W1201 22:04:22.475750  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:04:22.475757  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:04:22.475842  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:04:22.502431  661844 cri.go:89] found id: ""
	I1201 22:04:22.502498  661844 logs.go:282] 0 containers: []
	W1201 22:04:22.502521  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:04:22.502543  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:04:22.502609  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:04:22.528588  661844 cri.go:89] found id: ""
	I1201 22:04:22.528612  661844 logs.go:282] 0 containers: []
	W1201 22:04:22.528620  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:04:22.528627  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:04:22.528688  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:04:22.554416  661844 cri.go:89] found id: ""
	I1201 22:04:22.554455  661844 logs.go:282] 0 containers: []
	W1201 22:04:22.554465  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:04:22.554474  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:04:22.554487  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:04:22.583694  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:04:22.583721  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:04:22.650528  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:04:22.650565  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:04:22.667421  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:04:22.667502  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:04:22.738003  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:04:22.738082  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:04:22.738105  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:04:25.279599  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:25.290954  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:04:25.291029  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:04:25.318937  661844 cri.go:89] found id: ""
	I1201 22:04:25.318958  661844 logs.go:282] 0 containers: []
	W1201 22:04:25.318967  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:04:25.318973  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:04:25.319036  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:04:25.345626  661844 cri.go:89] found id: ""
	I1201 22:04:25.345648  661844 logs.go:282] 0 containers: []
	W1201 22:04:25.345656  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:04:25.345662  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:04:25.345730  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:04:25.375770  661844 cri.go:89] found id: ""
	I1201 22:04:25.375793  661844 logs.go:282] 0 containers: []
	W1201 22:04:25.375802  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:04:25.375809  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:04:25.375871  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:04:25.401615  661844 cri.go:89] found id: ""
	I1201 22:04:25.401638  661844 logs.go:282] 0 containers: []
	W1201 22:04:25.401646  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:04:25.401653  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:04:25.401715  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:04:25.429978  661844 cri.go:89] found id: ""
	I1201 22:04:25.430000  661844 logs.go:282] 0 containers: []
	W1201 22:04:25.430009  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:04:25.430015  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:04:25.430079  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:04:25.456386  661844 cri.go:89] found id: ""
	I1201 22:04:25.456409  661844 logs.go:282] 0 containers: []
	W1201 22:04:25.456418  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:04:25.456424  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:04:25.456484  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:04:25.487711  661844 cri.go:89] found id: ""
	I1201 22:04:25.487789  661844 logs.go:282] 0 containers: []
	W1201 22:04:25.487804  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:04:25.487811  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:04:25.487880  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:04:25.513535  661844 cri.go:89] found id: ""
	I1201 22:04:25.513561  661844 logs.go:282] 0 containers: []
	W1201 22:04:25.513570  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:04:25.513579  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:04:25.513591  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:04:25.580717  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:04:25.580757  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:04:25.598405  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:04:25.598435  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:04:25.667201  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:04:25.667230  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:04:25.667244  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:04:25.707825  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:04:25.707863  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:04:28.237593  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:28.248666  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:04:28.248741  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:04:28.275329  661844 cri.go:89] found id: ""
	I1201 22:04:28.275365  661844 logs.go:282] 0 containers: []
	W1201 22:04:28.275374  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:04:28.275381  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:04:28.275457  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:04:28.302782  661844 cri.go:89] found id: ""
	I1201 22:04:28.302809  661844 logs.go:282] 0 containers: []
	W1201 22:04:28.302825  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:04:28.302845  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:04:28.302908  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:04:28.330283  661844 cri.go:89] found id: ""
	I1201 22:04:28.330327  661844 logs.go:282] 0 containers: []
	W1201 22:04:28.330338  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:04:28.330363  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:04:28.330454  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:04:28.366344  661844 cri.go:89] found id: ""
	I1201 22:04:28.366367  661844 logs.go:282] 0 containers: []
	W1201 22:04:28.366375  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:04:28.366382  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:04:28.366440  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:04:28.391207  661844 cri.go:89] found id: ""
	I1201 22:04:28.391231  661844 logs.go:282] 0 containers: []
	W1201 22:04:28.391240  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:04:28.391246  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:04:28.391326  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:04:28.419621  661844 cri.go:89] found id: ""
	I1201 22:04:28.419642  661844 logs.go:282] 0 containers: []
	W1201 22:04:28.419650  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:04:28.419656  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:04:28.419718  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:04:28.447050  661844 cri.go:89] found id: ""
	I1201 22:04:28.447126  661844 logs.go:282] 0 containers: []
	W1201 22:04:28.447168  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:04:28.447184  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:04:28.447262  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:04:28.474496  661844 cri.go:89] found id: ""
	I1201 22:04:28.474525  661844 logs.go:282] 0 containers: []
	W1201 22:04:28.474579  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:04:28.474598  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:04:28.474612  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:04:28.545871  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:04:28.545908  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:04:28.562722  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:04:28.562752  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:04:28.634969  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:04:28.635039  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:04:28.635061  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:04:28.679007  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:04:28.679050  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:04:31.211304  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:31.223977  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:04:31.224044  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:04:31.251599  661844 cri.go:89] found id: ""
	I1201 22:04:31.251627  661844 logs.go:282] 0 containers: []
	W1201 22:04:31.251637  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:04:31.251644  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:04:31.251711  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:04:31.279450  661844 cri.go:89] found id: ""
	I1201 22:04:31.279475  661844 logs.go:282] 0 containers: []
	W1201 22:04:31.279484  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:04:31.279492  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:04:31.279559  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:04:31.307937  661844 cri.go:89] found id: ""
	I1201 22:04:31.307960  661844 logs.go:282] 0 containers: []
	W1201 22:04:31.307969  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:04:31.307975  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:04:31.308036  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:04:31.335096  661844 cri.go:89] found id: ""
	I1201 22:04:31.335123  661844 logs.go:282] 0 containers: []
	W1201 22:04:31.335154  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:04:31.335162  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:04:31.335225  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:04:31.369386  661844 cri.go:89] found id: ""
	I1201 22:04:31.369410  661844 logs.go:282] 0 containers: []
	W1201 22:04:31.369418  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:04:31.369424  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:04:31.369491  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:04:31.395361  661844 cri.go:89] found id: ""
	I1201 22:04:31.395383  661844 logs.go:282] 0 containers: []
	W1201 22:04:31.395392  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:04:31.395399  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:04:31.395491  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:04:31.427005  661844 cri.go:89] found id: ""
	I1201 22:04:31.427031  661844 logs.go:282] 0 containers: []
	W1201 22:04:31.427040  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:04:31.427046  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:04:31.427110  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:04:31.453954  661844 cri.go:89] found id: ""
	I1201 22:04:31.454002  661844 logs.go:282] 0 containers: []
	W1201 22:04:31.454012  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:04:31.454021  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:04:31.454033  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:04:31.471260  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:04:31.471293  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:04:31.539657  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:04:31.539682  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:04:31.539696  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:04:31.580450  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:04:31.580484  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:04:31.615299  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:04:31.615328  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:04:34.184392  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:34.198170  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:04:34.198244  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:04:34.230931  661844 cri.go:89] found id: ""
	I1201 22:04:34.230955  661844 logs.go:282] 0 containers: []
	W1201 22:04:34.230970  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:04:34.230976  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:04:34.231037  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:04:34.268408  661844 cri.go:89] found id: ""
	I1201 22:04:34.268431  661844 logs.go:282] 0 containers: []
	W1201 22:04:34.268441  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:04:34.268447  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:04:34.268510  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:04:34.308619  661844 cri.go:89] found id: ""
	I1201 22:04:34.308643  661844 logs.go:282] 0 containers: []
	W1201 22:04:34.308653  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:04:34.308660  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:04:34.308727  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:04:34.338952  661844 cri.go:89] found id: ""
	I1201 22:04:34.338976  661844 logs.go:282] 0 containers: []
	W1201 22:04:34.338985  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:04:34.338992  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:04:34.339055  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:04:34.366377  661844 cri.go:89] found id: ""
	I1201 22:04:34.366400  661844 logs.go:282] 0 containers: []
	W1201 22:04:34.366409  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:04:34.366415  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:04:34.366478  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:04:34.393789  661844 cri.go:89] found id: ""
	I1201 22:04:34.393866  661844 logs.go:282] 0 containers: []
	W1201 22:04:34.393889  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:04:34.393909  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:04:34.393983  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:04:34.420643  661844 cri.go:89] found id: ""
	I1201 22:04:34.420672  661844 logs.go:282] 0 containers: []
	W1201 22:04:34.420682  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:04:34.420689  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:04:34.420752  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:04:34.447427  661844 cri.go:89] found id: ""
	I1201 22:04:34.447458  661844 logs.go:282] 0 containers: []
	W1201 22:04:34.447468  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:04:34.447479  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:04:34.447491  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:04:34.515851  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:04:34.515887  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:04:34.532662  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:04:34.532692  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:04:34.598120  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:04:34.598144  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:04:34.598174  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:04:34.638812  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:04:34.638851  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:04:37.170939  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:37.183080  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:04:37.183175  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:04:37.224670  661844 cri.go:89] found id: ""
	I1201 22:04:37.224697  661844 logs.go:282] 0 containers: []
	W1201 22:04:37.224706  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:04:37.224712  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:04:37.224779  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:04:37.259502  661844 cri.go:89] found id: ""
	I1201 22:04:37.259529  661844 logs.go:282] 0 containers: []
	W1201 22:04:37.259540  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:04:37.259548  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:04:37.259618  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:04:37.290711  661844 cri.go:89] found id: ""
	I1201 22:04:37.290736  661844 logs.go:282] 0 containers: []
	W1201 22:04:37.290745  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:04:37.290752  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:04:37.290817  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:04:37.319899  661844 cri.go:89] found id: ""
	I1201 22:04:37.319924  661844 logs.go:282] 0 containers: []
	W1201 22:04:37.319933  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:04:37.319940  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:04:37.320000  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:04:37.347191  661844 cri.go:89] found id: ""
	I1201 22:04:37.347220  661844 logs.go:282] 0 containers: []
	W1201 22:04:37.347229  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:04:37.347236  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:04:37.347298  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:04:37.374365  661844 cri.go:89] found id: ""
	I1201 22:04:37.374393  661844 logs.go:282] 0 containers: []
	W1201 22:04:37.374403  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:04:37.374414  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:04:37.374498  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:04:37.400879  661844 cri.go:89] found id: ""
	I1201 22:04:37.400905  661844 logs.go:282] 0 containers: []
	W1201 22:04:37.400914  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:04:37.400921  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:04:37.400984  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:04:37.427606  661844 cri.go:89] found id: ""
	I1201 22:04:37.427643  661844 logs.go:282] 0 containers: []
	W1201 22:04:37.427652  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:04:37.427661  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:04:37.427673  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:04:37.495655  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:04:37.495695  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:04:37.513159  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:04:37.513189  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:04:37.584855  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:04:37.584880  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:04:37.584896  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:04:37.626903  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:04:37.626935  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:04:40.160505  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:40.184427  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:04:40.184506  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:04:40.252667  661844 cri.go:89] found id: ""
	I1201 22:04:40.252691  661844 logs.go:282] 0 containers: []
	W1201 22:04:40.252701  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:04:40.252709  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:04:40.252778  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:04:40.287545  661844 cri.go:89] found id: ""
	I1201 22:04:40.287569  661844 logs.go:282] 0 containers: []
	W1201 22:04:40.287578  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:04:40.287585  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:04:40.287651  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:04:40.322651  661844 cri.go:89] found id: ""
	I1201 22:04:40.322674  661844 logs.go:282] 0 containers: []
	W1201 22:04:40.322682  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:04:40.322689  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:04:40.322746  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:04:40.355989  661844 cri.go:89] found id: ""
	I1201 22:04:40.356010  661844 logs.go:282] 0 containers: []
	W1201 22:04:40.356019  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:04:40.356027  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:04:40.356088  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:04:40.383420  661844 cri.go:89] found id: ""
	I1201 22:04:40.383501  661844 logs.go:282] 0 containers: []
	W1201 22:04:40.383532  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:04:40.383554  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:04:40.383673  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:04:40.412511  661844 cri.go:89] found id: ""
	I1201 22:04:40.412531  661844 logs.go:282] 0 containers: []
	W1201 22:04:40.412540  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:04:40.412547  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:04:40.412613  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:04:40.448743  661844 cri.go:89] found id: ""
	I1201 22:04:40.448763  661844 logs.go:282] 0 containers: []
	W1201 22:04:40.448771  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:04:40.448778  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:04:40.448876  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:04:40.479964  661844 cri.go:89] found id: ""
	I1201 22:04:40.479987  661844 logs.go:282] 0 containers: []
	W1201 22:04:40.479997  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:04:40.480007  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:04:40.480020  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:04:40.502498  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:04:40.502596  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:04:40.602994  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:04:40.603017  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:04:40.603033  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:04:40.648298  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:04:40.648336  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:04:40.677282  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:04:40.677310  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:04:43.248816  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:43.260472  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:04:43.260550  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:04:43.289626  661844 cri.go:89] found id: ""
	I1201 22:04:43.289650  661844 logs.go:282] 0 containers: []
	W1201 22:04:43.289659  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:04:43.289667  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:04:43.289734  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:04:43.316986  661844 cri.go:89] found id: ""
	I1201 22:04:43.317013  661844 logs.go:282] 0 containers: []
	W1201 22:04:43.317023  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:04:43.317030  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:04:43.317098  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:04:43.344464  661844 cri.go:89] found id: ""
	I1201 22:04:43.344487  661844 logs.go:282] 0 containers: []
	W1201 22:04:43.344496  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:04:43.344508  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:04:43.344571  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:04:43.373891  661844 cri.go:89] found id: ""
	I1201 22:04:43.373917  661844 logs.go:282] 0 containers: []
	W1201 22:04:43.373926  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:04:43.373932  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:04:43.373993  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:04:43.399392  661844 cri.go:89] found id: ""
	I1201 22:04:43.399419  661844 logs.go:282] 0 containers: []
	W1201 22:04:43.399428  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:04:43.399436  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:04:43.399497  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:04:43.424795  661844 cri.go:89] found id: ""
	I1201 22:04:43.424823  661844 logs.go:282] 0 containers: []
	W1201 22:04:43.424832  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:04:43.424839  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:04:43.424901  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:04:43.453886  661844 cri.go:89] found id: ""
	I1201 22:04:43.453910  661844 logs.go:282] 0 containers: []
	W1201 22:04:43.453919  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:04:43.453925  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:04:43.453984  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:04:43.478867  661844 cri.go:89] found id: ""
	I1201 22:04:43.478893  661844 logs.go:282] 0 containers: []
	W1201 22:04:43.478903  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:04:43.478912  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:04:43.478926  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:04:43.545302  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:04:43.545340  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:04:43.563150  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:04:43.563186  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:04:43.639447  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:04:43.639470  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:04:43.639483  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:04:43.680809  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:04:43.680840  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:04:46.213153  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:46.224632  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:04:46.224705  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:04:46.252591  661844 cri.go:89] found id: ""
	I1201 22:04:46.252613  661844 logs.go:282] 0 containers: []
	W1201 22:04:46.252621  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:04:46.252627  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:04:46.252691  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:04:46.279126  661844 cri.go:89] found id: ""
	I1201 22:04:46.279188  661844 logs.go:282] 0 containers: []
	W1201 22:04:46.279197  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:04:46.279204  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:04:46.279264  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:04:46.306015  661844 cri.go:89] found id: ""
	I1201 22:04:46.306087  661844 logs.go:282] 0 containers: []
	W1201 22:04:46.306110  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:04:46.306132  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:04:46.306223  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:04:46.332178  661844 cri.go:89] found id: ""
	I1201 22:04:46.332200  661844 logs.go:282] 0 containers: []
	W1201 22:04:46.332208  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:04:46.332216  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:04:46.332300  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:04:46.359303  661844 cri.go:89] found id: ""
	I1201 22:04:46.359336  661844 logs.go:282] 0 containers: []
	W1201 22:04:46.359345  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:04:46.359368  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:04:46.359454  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:04:46.385866  661844 cri.go:89] found id: ""
	I1201 22:04:46.385900  661844 logs.go:282] 0 containers: []
	W1201 22:04:46.385910  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:04:46.385933  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:04:46.386024  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:04:46.414245  661844 cri.go:89] found id: ""
	I1201 22:04:46.414288  661844 logs.go:282] 0 containers: []
	W1201 22:04:46.414298  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:04:46.414305  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:04:46.414373  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:04:46.447786  661844 cri.go:89] found id: ""
	I1201 22:04:46.447862  661844 logs.go:282] 0 containers: []
	W1201 22:04:46.447880  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:04:46.447890  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:04:46.447906  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:04:46.487691  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:04:46.487730  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:04:46.518009  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:04:46.518043  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:04:46.585117  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:04:46.585155  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:04:46.603483  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:04:46.603514  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:04:46.674101  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:04:49.175263  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:49.199641  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:04:49.199714  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:04:49.254801  661844 cri.go:89] found id: ""
	I1201 22:04:49.254827  661844 logs.go:282] 0 containers: []
	W1201 22:04:49.254835  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:04:49.254852  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:04:49.254919  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:04:49.287983  661844 cri.go:89] found id: ""
	I1201 22:04:49.288009  661844 logs.go:282] 0 containers: []
	W1201 22:04:49.288018  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:04:49.288026  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:04:49.288088  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:04:49.329196  661844 cri.go:89] found id: ""
	I1201 22:04:49.329221  661844 logs.go:282] 0 containers: []
	W1201 22:04:49.329230  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:04:49.329236  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:04:49.329303  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:04:49.365150  661844 cri.go:89] found id: ""
	I1201 22:04:49.365176  661844 logs.go:282] 0 containers: []
	W1201 22:04:49.365185  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:04:49.365191  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:04:49.365254  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:04:49.409548  661844 cri.go:89] found id: ""
	I1201 22:04:49.409576  661844 logs.go:282] 0 containers: []
	W1201 22:04:49.409585  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:04:49.409591  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:04:49.409652  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:04:49.441580  661844 cri.go:89] found id: ""
	I1201 22:04:49.441607  661844 logs.go:282] 0 containers: []
	W1201 22:04:49.441616  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:04:49.441623  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:04:49.441687  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:04:49.482021  661844 cri.go:89] found id: ""
	I1201 22:04:49.482047  661844 logs.go:282] 0 containers: []
	W1201 22:04:49.482056  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:04:49.482062  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:04:49.482128  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:04:49.529766  661844 cri.go:89] found id: ""
	I1201 22:04:49.529792  661844 logs.go:282] 0 containers: []
	W1201 22:04:49.529802  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:04:49.529811  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:04:49.529823  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:04:49.602087  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:04:49.602123  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:04:49.620168  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:04:49.620203  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:04:49.692020  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:04:49.692039  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:04:49.692060  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:04:49.735191  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:04:49.735232  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:04:52.266334  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:52.277616  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:04:52.277738  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:04:52.304583  661844 cri.go:89] found id: ""
	I1201 22:04:52.304604  661844 logs.go:282] 0 containers: []
	W1201 22:04:52.304612  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:04:52.304621  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:04:52.304680  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:04:52.334351  661844 cri.go:89] found id: ""
	I1201 22:04:52.334388  661844 logs.go:282] 0 containers: []
	W1201 22:04:52.334402  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:04:52.334410  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:04:52.334495  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:04:52.367195  661844 cri.go:89] found id: ""
	I1201 22:04:52.367223  661844 logs.go:282] 0 containers: []
	W1201 22:04:52.367233  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:04:52.367242  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:04:52.367313  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:04:52.398699  661844 cri.go:89] found id: ""
	I1201 22:04:52.398777  661844 logs.go:282] 0 containers: []
	W1201 22:04:52.398802  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:04:52.398822  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:04:52.398926  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:04:52.429976  661844 cri.go:89] found id: ""
	I1201 22:04:52.430013  661844 logs.go:282] 0 containers: []
	W1201 22:04:52.430032  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:04:52.430040  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:04:52.430125  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:04:52.464619  661844 cri.go:89] found id: ""
	I1201 22:04:52.464645  661844 logs.go:282] 0 containers: []
	W1201 22:04:52.464657  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:04:52.464664  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:04:52.464725  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:04:52.491912  661844 cri.go:89] found id: ""
	I1201 22:04:52.491935  661844 logs.go:282] 0 containers: []
	W1201 22:04:52.491944  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:04:52.491950  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:04:52.492006  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:04:52.519057  661844 cri.go:89] found id: ""
	I1201 22:04:52.519084  661844 logs.go:282] 0 containers: []
	W1201 22:04:52.519093  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:04:52.519102  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:04:52.519114  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:04:52.585502  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:04:52.585537  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:04:52.603390  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:04:52.603422  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:04:52.674330  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:04:52.674352  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:04:52.674366  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:04:52.715164  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:04:52.715198  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:04:55.253473  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:55.265007  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:04:55.265077  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:04:55.292725  661844 cri.go:89] found id: ""
	I1201 22:04:55.292748  661844 logs.go:282] 0 containers: []
	W1201 22:04:55.292757  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:04:55.292764  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:04:55.292822  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:04:55.321355  661844 cri.go:89] found id: ""
	I1201 22:04:55.321378  661844 logs.go:282] 0 containers: []
	W1201 22:04:55.321387  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:04:55.321394  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:04:55.321459  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:04:55.353024  661844 cri.go:89] found id: ""
	I1201 22:04:55.353052  661844 logs.go:282] 0 containers: []
	W1201 22:04:55.353061  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:04:55.353068  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:04:55.353129  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:04:55.379002  661844 cri.go:89] found id: ""
	I1201 22:04:55.379030  661844 logs.go:282] 0 containers: []
	W1201 22:04:55.379039  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:04:55.379045  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:04:55.379105  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:04:55.406968  661844 cri.go:89] found id: ""
	I1201 22:04:55.406995  661844 logs.go:282] 0 containers: []
	W1201 22:04:55.407005  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:04:55.407012  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:04:55.407074  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:04:55.437923  661844 cri.go:89] found id: ""
	I1201 22:04:55.437945  661844 logs.go:282] 0 containers: []
	W1201 22:04:55.437954  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:04:55.437960  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:04:55.438020  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:04:55.464990  661844 cri.go:89] found id: ""
	I1201 22:04:55.465011  661844 logs.go:282] 0 containers: []
	W1201 22:04:55.465020  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:04:55.465081  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:04:55.465156  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:04:55.490238  661844 cri.go:89] found id: ""
	I1201 22:04:55.490259  661844 logs.go:282] 0 containers: []
	W1201 22:04:55.490267  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:04:55.490276  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:04:55.490287  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:04:55.556063  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:04:55.556101  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:04:55.556122  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:04:55.596553  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:04:55.596589  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:04:55.624642  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:04:55.624667  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:04:55.691794  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:04:55.691833  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:04:58.210562  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:04:58.221913  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:04:58.221994  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:04:58.250662  661844 cri.go:89] found id: ""
	I1201 22:04:58.250686  661844 logs.go:282] 0 containers: []
	W1201 22:04:58.250695  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:04:58.250702  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:04:58.250771  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:04:58.277722  661844 cri.go:89] found id: ""
	I1201 22:04:58.277746  661844 logs.go:282] 0 containers: []
	W1201 22:04:58.277755  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:04:58.277762  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:04:58.277835  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:04:58.307268  661844 cri.go:89] found id: ""
	I1201 22:04:58.307290  661844 logs.go:282] 0 containers: []
	W1201 22:04:58.307299  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:04:58.307306  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:04:58.307363  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:04:58.335718  661844 cri.go:89] found id: ""
	I1201 22:04:58.335794  661844 logs.go:282] 0 containers: []
	W1201 22:04:58.335820  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:04:58.335827  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:04:58.335901  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:04:58.362180  661844 cri.go:89] found id: ""
	I1201 22:04:58.362205  661844 logs.go:282] 0 containers: []
	W1201 22:04:58.362214  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:04:58.362221  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:04:58.362289  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:04:58.388476  661844 cri.go:89] found id: ""
	I1201 22:04:58.388499  661844 logs.go:282] 0 containers: []
	W1201 22:04:58.388508  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:04:58.388515  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:04:58.388575  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:04:58.414265  661844 cri.go:89] found id: ""
	I1201 22:04:58.414293  661844 logs.go:282] 0 containers: []
	W1201 22:04:58.414302  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:04:58.414309  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:04:58.414373  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:04:58.445798  661844 cri.go:89] found id: ""
	I1201 22:04:58.445830  661844 logs.go:282] 0 containers: []
	W1201 22:04:58.445839  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:04:58.445848  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:04:58.445863  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:04:58.521208  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:04:58.521231  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:04:58.521246  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:04:58.564273  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:04:58.564317  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:04:58.598760  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:04:58.598789  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:04:58.665673  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:04:58.665713  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:01.183303  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:01.198867  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:01.198951  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:01.234426  661844 cri.go:89] found id: ""
	I1201 22:05:01.234456  661844 logs.go:282] 0 containers: []
	W1201 22:05:01.234465  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:01.234472  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:01.234534  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:01.264290  661844 cri.go:89] found id: ""
	I1201 22:05:01.264360  661844 logs.go:282] 0 containers: []
	W1201 22:05:01.264385  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:01.264406  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:01.264496  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:01.294958  661844 cri.go:89] found id: ""
	I1201 22:05:01.294989  661844 logs.go:282] 0 containers: []
	W1201 22:05:01.294998  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:01.295006  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:01.295071  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:01.323265  661844 cri.go:89] found id: ""
	I1201 22:05:01.323354  661844 logs.go:282] 0 containers: []
	W1201 22:05:01.323370  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:01.323378  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:01.323458  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:01.351357  661844 cri.go:89] found id: ""
	I1201 22:05:01.351384  661844 logs.go:282] 0 containers: []
	W1201 22:05:01.351394  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:01.351401  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:01.351466  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:01.381370  661844 cri.go:89] found id: ""
	I1201 22:05:01.381395  661844 logs.go:282] 0 containers: []
	W1201 22:05:01.381404  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:01.381411  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:01.381472  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:01.409809  661844 cri.go:89] found id: ""
	I1201 22:05:01.409838  661844 logs.go:282] 0 containers: []
	W1201 22:05:01.409851  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:01.409859  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:01.409927  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:01.438431  661844 cri.go:89] found id: ""
	I1201 22:05:01.438467  661844 logs.go:282] 0 containers: []
	W1201 22:05:01.438479  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:01.438490  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:01.438503  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:01.506340  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:01.506378  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:01.523320  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:01.523354  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:01.597061  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:01.597082  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:01.597095  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:01.639195  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:01.639232  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:05:04.175683  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:04.187844  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:04.187913  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:04.221568  661844 cri.go:89] found id: ""
	I1201 22:05:04.221591  661844 logs.go:282] 0 containers: []
	W1201 22:05:04.221600  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:04.221607  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:04.221667  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:04.249774  661844 cri.go:89] found id: ""
	I1201 22:05:04.249797  661844 logs.go:282] 0 containers: []
	W1201 22:05:04.249806  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:04.249813  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:04.249874  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:04.282479  661844 cri.go:89] found id: ""
	I1201 22:05:04.282506  661844 logs.go:282] 0 containers: []
	W1201 22:05:04.282515  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:04.282522  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:04.282598  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:04.319802  661844 cri.go:89] found id: ""
	I1201 22:05:04.319850  661844 logs.go:282] 0 containers: []
	W1201 22:05:04.319860  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:04.319867  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:04.319931  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:04.348791  661844 cri.go:89] found id: ""
	I1201 22:05:04.348858  661844 logs.go:282] 0 containers: []
	W1201 22:05:04.348882  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:04.348902  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:04.348990  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:04.374939  661844 cri.go:89] found id: ""
	I1201 22:05:04.375007  661844 logs.go:282] 0 containers: []
	W1201 22:05:04.375036  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:04.375056  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:04.375200  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:04.406162  661844 cri.go:89] found id: ""
	I1201 22:05:04.406230  661844 logs.go:282] 0 containers: []
	W1201 22:05:04.406252  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:04.406272  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:04.406361  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:04.432919  661844 cri.go:89] found id: ""
	I1201 22:05:04.432944  661844 logs.go:282] 0 containers: []
	W1201 22:05:04.432952  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:04.432961  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:04.432992  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:04.497574  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:04.497649  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:04.497671  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:04.539259  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:04.539292  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:05:04.568827  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:04.568860  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:04.638351  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:04.638386  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:07.160359  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:07.172867  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:07.172942  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:07.202550  661844 cri.go:89] found id: ""
	I1201 22:05:07.202579  661844 logs.go:282] 0 containers: []
	W1201 22:05:07.202595  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:07.202602  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:07.202664  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:07.233840  661844 cri.go:89] found id: ""
	I1201 22:05:07.233867  661844 logs.go:282] 0 containers: []
	W1201 22:05:07.233877  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:07.233883  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:07.233949  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:07.263110  661844 cri.go:89] found id: ""
	I1201 22:05:07.263165  661844 logs.go:282] 0 containers: []
	W1201 22:05:07.263176  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:07.263185  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:07.263256  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:07.292026  661844 cri.go:89] found id: ""
	I1201 22:05:07.292056  661844 logs.go:282] 0 containers: []
	W1201 22:05:07.292065  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:07.292073  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:07.292138  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:07.319071  661844 cri.go:89] found id: ""
	I1201 22:05:07.319099  661844 logs.go:282] 0 containers: []
	W1201 22:05:07.319108  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:07.319115  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:07.319203  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:07.347286  661844 cri.go:89] found id: ""
	I1201 22:05:07.347314  661844 logs.go:282] 0 containers: []
	W1201 22:05:07.347323  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:07.347330  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:07.347395  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:07.373493  661844 cri.go:89] found id: ""
	I1201 22:05:07.373520  661844 logs.go:282] 0 containers: []
	W1201 22:05:07.373530  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:07.373536  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:07.373597  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:07.400974  661844 cri.go:89] found id: ""
	I1201 22:05:07.400998  661844 logs.go:282] 0 containers: []
	W1201 22:05:07.401007  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:07.401017  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:07.401029  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:07.470839  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:07.470868  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:07.470881  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:07.513067  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:07.513104  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:05:07.541682  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:07.541712  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:07.609714  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:07.609747  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:10.131647  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:10.145611  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:10.145787  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:10.228532  661844 cri.go:89] found id: ""
	I1201 22:05:10.228613  661844 logs.go:282] 0 containers: []
	W1201 22:05:10.228637  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:10.228658  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:10.228801  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:10.287622  661844 cri.go:89] found id: ""
	I1201 22:05:10.287648  661844 logs.go:282] 0 containers: []
	W1201 22:05:10.287657  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:10.287663  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:10.287727  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:10.321625  661844 cri.go:89] found id: ""
	I1201 22:05:10.321646  661844 logs.go:282] 0 containers: []
	W1201 22:05:10.321654  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:10.321661  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:10.321725  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:10.368109  661844 cri.go:89] found id: ""
	I1201 22:05:10.368132  661844 logs.go:282] 0 containers: []
	W1201 22:05:10.368142  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:10.368148  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:10.368208  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:10.400209  661844 cri.go:89] found id: ""
	I1201 22:05:10.400230  661844 logs.go:282] 0 containers: []
	W1201 22:05:10.400239  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:10.400245  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:10.400305  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:10.434116  661844 cri.go:89] found id: ""
	I1201 22:05:10.434137  661844 logs.go:282] 0 containers: []
	W1201 22:05:10.434146  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:10.434152  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:10.434212  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:10.469211  661844 cri.go:89] found id: ""
	I1201 22:05:10.469291  661844 logs.go:282] 0 containers: []
	W1201 22:05:10.469303  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:10.469311  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:10.469414  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:10.505984  661844 cri.go:89] found id: ""
	I1201 22:05:10.506005  661844 logs.go:282] 0 containers: []
	W1201 22:05:10.506013  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:10.506022  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:10.506034  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:10.574772  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:10.574814  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:10.595039  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:10.595069  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:10.663094  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:10.663116  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:10.663162  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:10.703733  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:10.703769  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:05:13.243411  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:13.256000  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:13.256072  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:13.289065  661844 cri.go:89] found id: ""
	I1201 22:05:13.289091  661844 logs.go:282] 0 containers: []
	W1201 22:05:13.289099  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:13.289105  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:13.289165  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:13.326024  661844 cri.go:89] found id: ""
	I1201 22:05:13.326046  661844 logs.go:282] 0 containers: []
	W1201 22:05:13.326055  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:13.326062  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:13.326125  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:13.359013  661844 cri.go:89] found id: ""
	I1201 22:05:13.359035  661844 logs.go:282] 0 containers: []
	W1201 22:05:13.359043  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:13.359050  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:13.359110  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:13.393250  661844 cri.go:89] found id: ""
	I1201 22:05:13.393273  661844 logs.go:282] 0 containers: []
	W1201 22:05:13.393281  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:13.393288  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:13.393349  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:13.431380  661844 cri.go:89] found id: ""
	I1201 22:05:13.431401  661844 logs.go:282] 0 containers: []
	W1201 22:05:13.431410  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:13.431416  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:13.431476  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:13.470153  661844 cri.go:89] found id: ""
	I1201 22:05:13.470175  661844 logs.go:282] 0 containers: []
	W1201 22:05:13.470183  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:13.470190  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:13.470251  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:13.497556  661844 cri.go:89] found id: ""
	I1201 22:05:13.497629  661844 logs.go:282] 0 containers: []
	W1201 22:05:13.497670  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:13.497699  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:13.497795  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:13.537675  661844 cri.go:89] found id: ""
	I1201 22:05:13.537749  661844 logs.go:282] 0 containers: []
	W1201 22:05:13.537770  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:13.537792  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:13.537829  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:13.612172  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:13.612210  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:13.630496  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:13.630528  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:13.750212  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:13.750242  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:13.750257  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:13.799667  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:13.799703  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:05:16.354641  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:16.366468  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:16.366544  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:16.397143  661844 cri.go:89] found id: ""
	I1201 22:05:16.397174  661844 logs.go:282] 0 containers: []
	W1201 22:05:16.397186  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:16.397196  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:16.397266  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:16.424757  661844 cri.go:89] found id: ""
	I1201 22:05:16.424783  661844 logs.go:282] 0 containers: []
	W1201 22:05:16.424792  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:16.424799  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:16.424865  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:16.459757  661844 cri.go:89] found id: ""
	I1201 22:05:16.459783  661844 logs.go:282] 0 containers: []
	W1201 22:05:16.459792  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:16.459799  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:16.459856  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:16.500321  661844 cri.go:89] found id: ""
	I1201 22:05:16.500346  661844 logs.go:282] 0 containers: []
	W1201 22:05:16.500356  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:16.500363  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:16.500428  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:16.538269  661844 cri.go:89] found id: ""
	I1201 22:05:16.538301  661844 logs.go:282] 0 containers: []
	W1201 22:05:16.538310  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:16.538317  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:16.538390  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:16.577711  661844 cri.go:89] found id: ""
	I1201 22:05:16.577748  661844 logs.go:282] 0 containers: []
	W1201 22:05:16.577758  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:16.577766  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:16.577844  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:16.617194  661844 cri.go:89] found id: ""
	I1201 22:05:16.617265  661844 logs.go:282] 0 containers: []
	W1201 22:05:16.617292  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:16.617313  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:16.617422  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:16.645812  661844 cri.go:89] found id: ""
	I1201 22:05:16.645847  661844 logs.go:282] 0 containers: []
	W1201 22:05:16.645856  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:16.645866  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:16.645878  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:16.695510  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:16.695550  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:05:16.752359  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:16.752393  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:16.835985  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:16.836026  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:16.856541  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:16.856578  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:16.949974  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:19.450267  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:19.461600  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:19.461677  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:19.488684  661844 cri.go:89] found id: ""
	I1201 22:05:19.488708  661844 logs.go:282] 0 containers: []
	W1201 22:05:19.488724  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:19.488732  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:19.488807  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:19.516969  661844 cri.go:89] found id: ""
	I1201 22:05:19.516995  661844 logs.go:282] 0 containers: []
	W1201 22:05:19.517004  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:19.517010  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:19.517078  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:19.544047  661844 cri.go:89] found id: ""
	I1201 22:05:19.544071  661844 logs.go:282] 0 containers: []
	W1201 22:05:19.544081  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:19.544087  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:19.544143  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:19.571585  661844 cri.go:89] found id: ""
	I1201 22:05:19.571609  661844 logs.go:282] 0 containers: []
	W1201 22:05:19.571619  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:19.571626  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:19.571687  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:19.600324  661844 cri.go:89] found id: ""
	I1201 22:05:19.600352  661844 logs.go:282] 0 containers: []
	W1201 22:05:19.600362  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:19.600368  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:19.600432  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:19.627945  661844 cri.go:89] found id: ""
	I1201 22:05:19.627972  661844 logs.go:282] 0 containers: []
	W1201 22:05:19.627983  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:19.627991  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:19.628055  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:19.654625  661844 cri.go:89] found id: ""
	I1201 22:05:19.654648  661844 logs.go:282] 0 containers: []
	W1201 22:05:19.654656  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:19.654663  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:19.654722  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:19.684328  661844 cri.go:89] found id: ""
	I1201 22:05:19.684407  661844 logs.go:282] 0 containers: []
	W1201 22:05:19.684423  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:19.684434  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:19.684447  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:19.768219  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:19.768262  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:19.786235  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:19.786270  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:19.857864  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:19.857928  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:19.857944  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:19.910071  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:19.910159  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:05:22.496067  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:22.507937  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:22.508026  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:22.536017  661844 cri.go:89] found id: ""
	I1201 22:05:22.536043  661844 logs.go:282] 0 containers: []
	W1201 22:05:22.536052  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:22.536058  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:22.536116  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:22.562869  661844 cri.go:89] found id: ""
	I1201 22:05:22.562898  661844 logs.go:282] 0 containers: []
	W1201 22:05:22.562907  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:22.562914  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:22.562976  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:22.588519  661844 cri.go:89] found id: ""
	I1201 22:05:22.588543  661844 logs.go:282] 0 containers: []
	W1201 22:05:22.588552  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:22.588558  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:22.588618  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:22.619127  661844 cri.go:89] found id: ""
	I1201 22:05:22.619165  661844 logs.go:282] 0 containers: []
	W1201 22:05:22.619174  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:22.619181  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:22.619252  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:22.647526  661844 cri.go:89] found id: ""
	I1201 22:05:22.647553  661844 logs.go:282] 0 containers: []
	W1201 22:05:22.647563  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:22.647569  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:22.647631  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:22.673367  661844 cri.go:89] found id: ""
	I1201 22:05:22.673394  661844 logs.go:282] 0 containers: []
	W1201 22:05:22.673403  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:22.673410  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:22.673474  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:22.700487  661844 cri.go:89] found id: ""
	I1201 22:05:22.700511  661844 logs.go:282] 0 containers: []
	W1201 22:05:22.700520  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:22.700527  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:22.700590  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:22.729974  661844 cri.go:89] found id: ""
	I1201 22:05:22.729998  661844 logs.go:282] 0 containers: []
	W1201 22:05:22.730007  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:22.730017  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:22.730029  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:22.770817  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:22.770852  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:05:22.800640  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:22.800684  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:22.868037  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:22.868078  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:22.885066  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:22.885098  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:22.979967  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:25.480731  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:25.492000  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:25.492078  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:25.522718  661844 cri.go:89] found id: ""
	I1201 22:05:25.522744  661844 logs.go:282] 0 containers: []
	W1201 22:05:25.522753  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:25.522761  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:25.522822  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:25.549367  661844 cri.go:89] found id: ""
	I1201 22:05:25.549395  661844 logs.go:282] 0 containers: []
	W1201 22:05:25.549404  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:25.549413  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:25.549478  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:25.576525  661844 cri.go:89] found id: ""
	I1201 22:05:25.576552  661844 logs.go:282] 0 containers: []
	W1201 22:05:25.576561  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:25.576568  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:25.576633  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:25.603860  661844 cri.go:89] found id: ""
	I1201 22:05:25.603887  661844 logs.go:282] 0 containers: []
	W1201 22:05:25.603896  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:25.603902  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:25.603961  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:25.633314  661844 cri.go:89] found id: ""
	I1201 22:05:25.633354  661844 logs.go:282] 0 containers: []
	W1201 22:05:25.633365  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:25.633373  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:25.633449  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:25.664258  661844 cri.go:89] found id: ""
	I1201 22:05:25.664335  661844 logs.go:282] 0 containers: []
	W1201 22:05:25.664347  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:25.664354  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:25.664448  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:25.694466  661844 cri.go:89] found id: ""
	I1201 22:05:25.694494  661844 logs.go:282] 0 containers: []
	W1201 22:05:25.694504  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:25.694511  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:25.694584  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:25.728468  661844 cri.go:89] found id: ""
	I1201 22:05:25.728492  661844 logs.go:282] 0 containers: []
	W1201 22:05:25.728501  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:25.728510  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:25.728522  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:25.746482  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:25.746511  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:25.812817  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:25.812849  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:25.812865  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:25.857211  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:25.857252  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:05:25.887403  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:25.887441  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:28.464279  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:28.475718  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:28.475792  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:28.502682  661844 cri.go:89] found id: ""
	I1201 22:05:28.502710  661844 logs.go:282] 0 containers: []
	W1201 22:05:28.502719  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:28.502726  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:28.502789  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:28.532332  661844 cri.go:89] found id: ""
	I1201 22:05:28.532357  661844 logs.go:282] 0 containers: []
	W1201 22:05:28.532366  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:28.532373  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:28.532434  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:28.559037  661844 cri.go:89] found id: ""
	I1201 22:05:28.559063  661844 logs.go:282] 0 containers: []
	W1201 22:05:28.559072  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:28.559078  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:28.559170  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:28.585851  661844 cri.go:89] found id: ""
	I1201 22:05:28.585876  661844 logs.go:282] 0 containers: []
	W1201 22:05:28.585885  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:28.585892  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:28.585953  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:28.612821  661844 cri.go:89] found id: ""
	I1201 22:05:28.612847  661844 logs.go:282] 0 containers: []
	W1201 22:05:28.612859  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:28.612866  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:28.612930  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:28.640733  661844 cri.go:89] found id: ""
	I1201 22:05:28.640762  661844 logs.go:282] 0 containers: []
	W1201 22:05:28.640772  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:28.640780  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:28.640844  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:28.669612  661844 cri.go:89] found id: ""
	I1201 22:05:28.669698  661844 logs.go:282] 0 containers: []
	W1201 22:05:28.669722  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:28.669754  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:28.669837  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:28.696138  661844 cri.go:89] found id: ""
	I1201 22:05:28.696161  661844 logs.go:282] 0 containers: []
	W1201 22:05:28.696170  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:28.696178  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:28.696192  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:28.713334  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:28.713364  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:28.783704  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:28.783725  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:28.783739  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:28.824962  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:28.824997  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:05:28.855610  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:28.855640  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:31.424417  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:31.435322  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:31.435394  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:31.463108  661844 cri.go:89] found id: ""
	I1201 22:05:31.463157  661844 logs.go:282] 0 containers: []
	W1201 22:05:31.463167  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:31.463174  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:31.463237  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:31.493618  661844 cri.go:89] found id: ""
	I1201 22:05:31.493645  661844 logs.go:282] 0 containers: []
	W1201 22:05:31.493654  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:31.493667  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:31.493724  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:31.522997  661844 cri.go:89] found id: ""
	I1201 22:05:31.523023  661844 logs.go:282] 0 containers: []
	W1201 22:05:31.523032  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:31.523039  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:31.523100  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:31.549790  661844 cri.go:89] found id: ""
	I1201 22:05:31.549816  661844 logs.go:282] 0 containers: []
	W1201 22:05:31.549825  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:31.549832  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:31.549891  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:31.578634  661844 cri.go:89] found id: ""
	I1201 22:05:31.578661  661844 logs.go:282] 0 containers: []
	W1201 22:05:31.578670  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:31.578677  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:31.578738  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:31.605508  661844 cri.go:89] found id: ""
	I1201 22:05:31.605531  661844 logs.go:282] 0 containers: []
	W1201 22:05:31.605539  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:31.605546  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:31.605606  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:31.632229  661844 cri.go:89] found id: ""
	I1201 22:05:31.632254  661844 logs.go:282] 0 containers: []
	W1201 22:05:31.632264  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:31.632272  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:31.632340  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:31.659637  661844 cri.go:89] found id: ""
	I1201 22:05:31.659659  661844 logs.go:282] 0 containers: []
	W1201 22:05:31.659668  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:31.659678  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:31.659690  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:31.730981  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:31.731017  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:31.748428  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:31.748464  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:31.822251  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:31.822269  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:31.822282  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:31.862963  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:31.863037  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:05:34.399245  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:34.412428  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:34.412503  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:34.462184  661844 cri.go:89] found id: ""
	I1201 22:05:34.462211  661844 logs.go:282] 0 containers: []
	W1201 22:05:34.462221  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:34.462229  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:34.462292  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:34.513563  661844 cri.go:89] found id: ""
	I1201 22:05:34.513590  661844 logs.go:282] 0 containers: []
	W1201 22:05:34.513601  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:34.513608  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:34.513676  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:34.553440  661844 cri.go:89] found id: ""
	I1201 22:05:34.553464  661844 logs.go:282] 0 containers: []
	W1201 22:05:34.553473  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:34.553481  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:34.553548  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:34.585783  661844 cri.go:89] found id: ""
	I1201 22:05:34.585807  661844 logs.go:282] 0 containers: []
	W1201 22:05:34.585815  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:34.585822  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:34.585879  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:34.616462  661844 cri.go:89] found id: ""
	I1201 22:05:34.616483  661844 logs.go:282] 0 containers: []
	W1201 22:05:34.616491  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:34.616498  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:34.616562  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:34.644138  661844 cri.go:89] found id: ""
	I1201 22:05:34.644160  661844 logs.go:282] 0 containers: []
	W1201 22:05:34.644168  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:34.644178  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:34.644238  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:34.680826  661844 cri.go:89] found id: ""
	I1201 22:05:34.680849  661844 logs.go:282] 0 containers: []
	W1201 22:05:34.680859  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:34.680866  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:34.680934  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:34.711884  661844 cri.go:89] found id: ""
	I1201 22:05:34.711906  661844 logs.go:282] 0 containers: []
	W1201 22:05:34.711915  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:34.711925  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:34.711941  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:34.736457  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:34.736546  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:34.831611  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:34.831631  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:34.831644  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:34.879357  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:34.879447  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:05:34.924161  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:34.924238  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:37.522599  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:37.534087  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:37.534168  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:37.561263  661844 cri.go:89] found id: ""
	I1201 22:05:37.561287  661844 logs.go:282] 0 containers: []
	W1201 22:05:37.561296  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:37.561304  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:37.561365  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:37.589287  661844 cri.go:89] found id: ""
	I1201 22:05:37.589309  661844 logs.go:282] 0 containers: []
	W1201 22:05:37.589318  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:37.589324  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:37.589405  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:37.617440  661844 cri.go:89] found id: ""
	I1201 22:05:37.617463  661844 logs.go:282] 0 containers: []
	W1201 22:05:37.617471  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:37.617478  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:37.617541  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:37.644309  661844 cri.go:89] found id: ""
	I1201 22:05:37.644336  661844 logs.go:282] 0 containers: []
	W1201 22:05:37.644345  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:37.644352  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:37.644413  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:37.670783  661844 cri.go:89] found id: ""
	I1201 22:05:37.670805  661844 logs.go:282] 0 containers: []
	W1201 22:05:37.670814  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:37.670821  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:37.670879  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:37.697721  661844 cri.go:89] found id: ""
	I1201 22:05:37.697748  661844 logs.go:282] 0 containers: []
	W1201 22:05:37.697757  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:37.697764  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:37.697831  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:37.726282  661844 cri.go:89] found id: ""
	I1201 22:05:37.726306  661844 logs.go:282] 0 containers: []
	W1201 22:05:37.726315  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:37.726322  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:37.726384  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:37.753005  661844 cri.go:89] found id: ""
	I1201 22:05:37.753029  661844 logs.go:282] 0 containers: []
	W1201 22:05:37.753039  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:37.753049  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:37.753062  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:37.794666  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:37.794698  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:05:37.824439  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:37.824467  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:37.891640  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:37.891676  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:37.910614  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:37.910730  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:37.993086  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:40.494295  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:40.505583  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:40.505690  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:40.532954  661844 cri.go:89] found id: ""
	I1201 22:05:40.532980  661844 logs.go:282] 0 containers: []
	W1201 22:05:40.532989  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:40.532996  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:40.533063  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:40.560307  661844 cri.go:89] found id: ""
	I1201 22:05:40.560335  661844 logs.go:282] 0 containers: []
	W1201 22:05:40.560346  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:40.560352  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:40.560421  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:40.591843  661844 cri.go:89] found id: ""
	I1201 22:05:40.591873  661844 logs.go:282] 0 containers: []
	W1201 22:05:40.591884  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:40.591892  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:40.591968  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:40.620168  661844 cri.go:89] found id: ""
	I1201 22:05:40.620193  661844 logs.go:282] 0 containers: []
	W1201 22:05:40.620201  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:40.620209  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:40.620275  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:40.647340  661844 cri.go:89] found id: ""
	I1201 22:05:40.647366  661844 logs.go:282] 0 containers: []
	W1201 22:05:40.647375  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:40.647382  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:40.647445  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:40.672885  661844 cri.go:89] found id: ""
	I1201 22:05:40.672913  661844 logs.go:282] 0 containers: []
	W1201 22:05:40.672923  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:40.672933  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:40.672997  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:40.703241  661844 cri.go:89] found id: ""
	I1201 22:05:40.703269  661844 logs.go:282] 0 containers: []
	W1201 22:05:40.703278  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:40.703285  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:40.703350  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:40.730566  661844 cri.go:89] found id: ""
	I1201 22:05:40.730603  661844 logs.go:282] 0 containers: []
	W1201 22:05:40.730612  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:40.730622  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:40.730634  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:40.798745  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:40.798789  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:40.818857  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:40.818903  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:40.895346  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:40.895368  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:40.895381  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:40.942279  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:40.942362  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:05:43.486494  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:43.497329  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:43.497400  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:43.524124  661844 cri.go:89] found id: ""
	I1201 22:05:43.524151  661844 logs.go:282] 0 containers: []
	W1201 22:05:43.524160  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:43.524167  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:43.524227  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:43.553541  661844 cri.go:89] found id: ""
	I1201 22:05:43.553567  661844 logs.go:282] 0 containers: []
	W1201 22:05:43.553576  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:43.553582  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:43.553661  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:43.583745  661844 cri.go:89] found id: ""
	I1201 22:05:43.583773  661844 logs.go:282] 0 containers: []
	W1201 22:05:43.583783  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:43.583790  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:43.583852  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:43.610077  661844 cri.go:89] found id: ""
	I1201 22:05:43.610111  661844 logs.go:282] 0 containers: []
	W1201 22:05:43.610124  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:43.610131  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:43.610202  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:43.636222  661844 cri.go:89] found id: ""
	I1201 22:05:43.636246  661844 logs.go:282] 0 containers: []
	W1201 22:05:43.636256  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:43.636263  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:43.636325  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:43.666504  661844 cri.go:89] found id: ""
	I1201 22:05:43.666580  661844 logs.go:282] 0 containers: []
	W1201 22:05:43.666607  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:43.666627  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:43.666717  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:43.697371  661844 cri.go:89] found id: ""
	I1201 22:05:43.697450  661844 logs.go:282] 0 containers: []
	W1201 22:05:43.697474  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:43.697490  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:43.697567  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:43.725496  661844 cri.go:89] found id: ""
	I1201 22:05:43.725530  661844 logs.go:282] 0 containers: []
	W1201 22:05:43.725539  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:43.725554  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:43.725566  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:05:43.759217  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:43.759259  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:43.830106  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:43.830144  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:43.851448  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:43.851480  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:43.991756  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:43.991778  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:43.991791  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:46.550849  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:46.561823  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:46.561898  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:46.588480  661844 cri.go:89] found id: ""
	I1201 22:05:46.588502  661844 logs.go:282] 0 containers: []
	W1201 22:05:46.588511  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:46.588518  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:46.588574  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:46.616891  661844 cri.go:89] found id: ""
	I1201 22:05:46.616917  661844 logs.go:282] 0 containers: []
	W1201 22:05:46.616926  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:46.616933  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:46.617007  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:46.644197  661844 cri.go:89] found id: ""
	I1201 22:05:46.644229  661844 logs.go:282] 0 containers: []
	W1201 22:05:46.644239  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:46.644245  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:46.644305  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:46.674335  661844 cri.go:89] found id: ""
	I1201 22:05:46.674357  661844 logs.go:282] 0 containers: []
	W1201 22:05:46.674365  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:46.674372  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:46.674438  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:46.699978  661844 cri.go:89] found id: ""
	I1201 22:05:46.700004  661844 logs.go:282] 0 containers: []
	W1201 22:05:46.700012  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:46.700019  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:46.700083  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:46.725894  661844 cri.go:89] found id: ""
	I1201 22:05:46.725918  661844 logs.go:282] 0 containers: []
	W1201 22:05:46.725926  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:46.725932  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:46.725994  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:46.751988  661844 cri.go:89] found id: ""
	I1201 22:05:46.752067  661844 logs.go:282] 0 containers: []
	W1201 22:05:46.752092  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:46.752113  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:46.752207  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:46.777987  661844 cri.go:89] found id: ""
	I1201 22:05:46.778010  661844 logs.go:282] 0 containers: []
	W1201 22:05:46.778019  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:46.778035  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:46.778047  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:46.845823  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:46.845865  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:46.863183  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:46.863267  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:46.936325  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:46.936348  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:46.936403  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:46.985793  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:46.985835  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:05:49.519985  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:49.531570  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:49.531652  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:49.557343  661844 cri.go:89] found id: ""
	I1201 22:05:49.557371  661844 logs.go:282] 0 containers: []
	W1201 22:05:49.557381  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:49.557389  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:49.557450  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:49.584476  661844 cri.go:89] found id: ""
	I1201 22:05:49.584504  661844 logs.go:282] 0 containers: []
	W1201 22:05:49.584514  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:49.584521  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:49.584587  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:49.614580  661844 cri.go:89] found id: ""
	I1201 22:05:49.614602  661844 logs.go:282] 0 containers: []
	W1201 22:05:49.614611  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:49.614618  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:49.614682  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:49.642007  661844 cri.go:89] found id: ""
	I1201 22:05:49.642029  661844 logs.go:282] 0 containers: []
	W1201 22:05:49.642038  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:49.642045  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:49.642107  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:49.670029  661844 cri.go:89] found id: ""
	I1201 22:05:49.670054  661844 logs.go:282] 0 containers: []
	W1201 22:05:49.670063  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:49.670069  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:49.670130  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:49.696860  661844 cri.go:89] found id: ""
	I1201 22:05:49.696887  661844 logs.go:282] 0 containers: []
	W1201 22:05:49.696895  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:49.696902  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:49.696963  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:49.722409  661844 cri.go:89] found id: ""
	I1201 22:05:49.722435  661844 logs.go:282] 0 containers: []
	W1201 22:05:49.722444  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:49.722450  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:49.722512  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:49.750058  661844 cri.go:89] found id: ""
	I1201 22:05:49.750084  661844 logs.go:282] 0 containers: []
	W1201 22:05:49.750095  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:49.750104  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:49.750115  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:49.817981  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:49.818017  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:49.840174  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:49.840225  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:49.913402  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:49.913473  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:49.913501  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:49.957137  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:49.957319  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:05:52.496258  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:52.507772  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:52.507843  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:52.535520  661844 cri.go:89] found id: ""
	I1201 22:05:52.535548  661844 logs.go:282] 0 containers: []
	W1201 22:05:52.535558  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:52.535565  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:52.535627  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:52.561621  661844 cri.go:89] found id: ""
	I1201 22:05:52.561643  661844 logs.go:282] 0 containers: []
	W1201 22:05:52.561652  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:52.561658  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:52.561717  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:52.588620  661844 cri.go:89] found id: ""
	I1201 22:05:52.588642  661844 logs.go:282] 0 containers: []
	W1201 22:05:52.588651  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:52.588658  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:52.588720  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:52.615526  661844 cri.go:89] found id: ""
	I1201 22:05:52.615557  661844 logs.go:282] 0 containers: []
	W1201 22:05:52.615566  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:52.615573  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:52.615656  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:52.646000  661844 cri.go:89] found id: ""
	I1201 22:05:52.646022  661844 logs.go:282] 0 containers: []
	W1201 22:05:52.646030  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:52.646037  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:52.646094  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:52.672084  661844 cri.go:89] found id: ""
	I1201 22:05:52.672106  661844 logs.go:282] 0 containers: []
	W1201 22:05:52.672114  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:52.672121  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:52.672179  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:52.699039  661844 cri.go:89] found id: ""
	I1201 22:05:52.699063  661844 logs.go:282] 0 containers: []
	W1201 22:05:52.699073  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:52.699080  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:52.699156  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:52.726843  661844 cri.go:89] found id: ""
	I1201 22:05:52.726869  661844 logs.go:282] 0 containers: []
	W1201 22:05:52.726892  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:52.726901  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:52.726922  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:52.794897  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:52.794938  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:52.811896  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:52.811924  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:52.883342  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:52.883367  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:52.883387  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:52.944462  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:52.944543  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:05:55.481756  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:55.493643  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:55.493733  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:55.521306  661844 cri.go:89] found id: ""
	I1201 22:05:55.521330  661844 logs.go:282] 0 containers: []
	W1201 22:05:55.521338  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:55.521344  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:55.521415  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:55.574461  661844 cri.go:89] found id: ""
	I1201 22:05:55.574483  661844 logs.go:282] 0 containers: []
	W1201 22:05:55.574493  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:55.574499  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:55.574559  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:55.631087  661844 cri.go:89] found id: ""
	I1201 22:05:55.631110  661844 logs.go:282] 0 containers: []
	W1201 22:05:55.631118  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:55.631125  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:55.631205  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:55.669035  661844 cri.go:89] found id: ""
	I1201 22:05:55.669062  661844 logs.go:282] 0 containers: []
	W1201 22:05:55.669072  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:55.669079  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:55.669140  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:55.705576  661844 cri.go:89] found id: ""
	I1201 22:05:55.705599  661844 logs.go:282] 0 containers: []
	W1201 22:05:55.705608  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:55.705615  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:55.705675  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:55.740795  661844 cri.go:89] found id: ""
	I1201 22:05:55.740822  661844 logs.go:282] 0 containers: []
	W1201 22:05:55.740832  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:55.740838  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:55.740902  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:55.783168  661844 cri.go:89] found id: ""
	I1201 22:05:55.783205  661844 logs.go:282] 0 containers: []
	W1201 22:05:55.783217  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:55.783225  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:55.783297  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:55.820547  661844 cri.go:89] found id: ""
	I1201 22:05:55.820574  661844 logs.go:282] 0 containers: []
	W1201 22:05:55.820583  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:55.820592  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:55.820604  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:55.906446  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:55.906532  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:55.936509  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:55.936539  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:56.057079  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:56.057105  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:56.057120  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:56.099522  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:56.099558  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:05:58.631362  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:05:58.643705  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:05:58.643792  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:05:58.682407  661844 cri.go:89] found id: ""
	I1201 22:05:58.682431  661844 logs.go:282] 0 containers: []
	W1201 22:05:58.682441  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:05:58.682448  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:05:58.682512  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:05:58.734083  661844 cri.go:89] found id: ""
	I1201 22:05:58.734104  661844 logs.go:282] 0 containers: []
	W1201 22:05:58.734112  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:05:58.734119  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:05:58.734177  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:05:58.767419  661844 cri.go:89] found id: ""
	I1201 22:05:58.767441  661844 logs.go:282] 0 containers: []
	W1201 22:05:58.767463  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:05:58.767470  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:05:58.767527  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:05:58.813962  661844 cri.go:89] found id: ""
	I1201 22:05:58.813990  661844 logs.go:282] 0 containers: []
	W1201 22:05:58.814000  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:05:58.814007  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:05:58.814072  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:05:58.853048  661844 cri.go:89] found id: ""
	I1201 22:05:58.853075  661844 logs.go:282] 0 containers: []
	W1201 22:05:58.853085  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:05:58.853092  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:05:58.853154  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:05:58.884225  661844 cri.go:89] found id: ""
	I1201 22:05:58.884251  661844 logs.go:282] 0 containers: []
	W1201 22:05:58.884260  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:05:58.884267  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:05:58.884325  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:05:58.937463  661844 cri.go:89] found id: ""
	I1201 22:05:58.937490  661844 logs.go:282] 0 containers: []
	W1201 22:05:58.937500  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:05:58.937506  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:05:58.937566  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:05:59.001873  661844 cri.go:89] found id: ""
	I1201 22:05:59.001940  661844 logs.go:282] 0 containers: []
	W1201 22:05:59.001967  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:05:59.001992  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:05:59.002030  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:05:59.080493  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:05:59.080574  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:05:59.097955  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:05:59.098035  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:05:59.193410  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:05:59.193471  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:05:59.193498  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:05:59.256488  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:05:59.256567  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:01.799311  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:01.812518  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:01.812619  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:01.846233  661844 cri.go:89] found id: ""
	I1201 22:06:01.846259  661844 logs.go:282] 0 containers: []
	W1201 22:06:01.846268  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:01.846275  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:01.846334  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:01.873271  661844 cri.go:89] found id: ""
	I1201 22:06:01.873296  661844 logs.go:282] 0 containers: []
	W1201 22:06:01.873304  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:01.873311  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:01.873368  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:01.899105  661844 cri.go:89] found id: ""
	I1201 22:06:01.899162  661844 logs.go:282] 0 containers: []
	W1201 22:06:01.899173  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:01.899181  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:01.899244  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:01.934087  661844 cri.go:89] found id: ""
	I1201 22:06:01.934116  661844 logs.go:282] 0 containers: []
	W1201 22:06:01.934125  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:01.934132  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:01.934222  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:01.965416  661844 cri.go:89] found id: ""
	I1201 22:06:01.965442  661844 logs.go:282] 0 containers: []
	W1201 22:06:01.965451  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:01.965458  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:01.965519  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:01.997308  661844 cri.go:89] found id: ""
	I1201 22:06:01.997334  661844 logs.go:282] 0 containers: []
	W1201 22:06:01.997355  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:01.997363  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:01.997430  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:02.031925  661844 cri.go:89] found id: ""
	I1201 22:06:02.031948  661844 logs.go:282] 0 containers: []
	W1201 22:06:02.031957  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:02.031964  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:02.032027  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:02.072158  661844 cri.go:89] found id: ""
	I1201 22:06:02.072181  661844 logs.go:282] 0 containers: []
	W1201 22:06:02.072192  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:02.072202  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:02.072232  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:02.169574  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:02.169592  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:02.169604  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:02.227296  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:02.227402  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:02.273862  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:02.273900  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:02.350963  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:02.351048  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:06:04.870702  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:04.881812  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:04.881883  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:04.921060  661844 cri.go:89] found id: ""
	I1201 22:06:04.921086  661844 logs.go:282] 0 containers: []
	W1201 22:06:04.921095  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:04.921102  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:04.921166  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:04.959237  661844 cri.go:89] found id: ""
	I1201 22:06:04.959261  661844 logs.go:282] 0 containers: []
	W1201 22:06:04.959269  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:04.959275  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:04.959335  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:04.990834  661844 cri.go:89] found id: ""
	I1201 22:06:04.990914  661844 logs.go:282] 0 containers: []
	W1201 22:06:04.990945  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:04.990978  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:04.991061  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:05.020888  661844 cri.go:89] found id: ""
	I1201 22:06:05.020913  661844 logs.go:282] 0 containers: []
	W1201 22:06:05.020922  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:05.020929  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:05.020992  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:05.049863  661844 cri.go:89] found id: ""
	I1201 22:06:05.049892  661844 logs.go:282] 0 containers: []
	W1201 22:06:05.049901  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:05.049908  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:05.049968  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:05.079181  661844 cri.go:89] found id: ""
	I1201 22:06:05.079208  661844 logs.go:282] 0 containers: []
	W1201 22:06:05.079218  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:05.079224  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:05.079286  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:05.105434  661844 cri.go:89] found id: ""
	I1201 22:06:05.105460  661844 logs.go:282] 0 containers: []
	W1201 22:06:05.105469  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:05.105475  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:05.105541  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:05.135625  661844 cri.go:89] found id: ""
	I1201 22:06:05.135650  661844 logs.go:282] 0 containers: []
	W1201 22:06:05.135659  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:05.135668  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:05.135681  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:05.205330  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:05.205367  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:06:05.223201  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:05.223231  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:05.291972  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:05.291996  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:05.292012  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:05.332477  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:05.332517  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:07.863742  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:07.874802  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:07.874875  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:07.902217  661844 cri.go:89] found id: ""
	I1201 22:06:07.902246  661844 logs.go:282] 0 containers: []
	W1201 22:06:07.902255  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:07.902263  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:07.902329  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:07.941651  661844 cri.go:89] found id: ""
	I1201 22:06:07.941678  661844 logs.go:282] 0 containers: []
	W1201 22:06:07.941688  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:07.941698  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:07.941784  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:07.986877  661844 cri.go:89] found id: ""
	I1201 22:06:07.986905  661844 logs.go:282] 0 containers: []
	W1201 22:06:07.986915  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:07.986929  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:07.986989  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:08.016694  661844 cri.go:89] found id: ""
	I1201 22:06:08.016723  661844 logs.go:282] 0 containers: []
	W1201 22:06:08.016733  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:08.016741  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:08.016807  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:08.044452  661844 cri.go:89] found id: ""
	I1201 22:06:08.044477  661844 logs.go:282] 0 containers: []
	W1201 22:06:08.044486  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:08.044492  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:08.044555  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:08.071862  661844 cri.go:89] found id: ""
	I1201 22:06:08.071890  661844 logs.go:282] 0 containers: []
	W1201 22:06:08.071900  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:08.071913  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:08.071978  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:08.099763  661844 cri.go:89] found id: ""
	I1201 22:06:08.099800  661844 logs.go:282] 0 containers: []
	W1201 22:06:08.099812  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:08.099820  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:08.099892  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:08.129625  661844 cri.go:89] found id: ""
	I1201 22:06:08.129652  661844 logs.go:282] 0 containers: []
	W1201 22:06:08.129662  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:08.129673  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:08.129686  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:08.159308  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:08.159337  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:08.227185  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:08.227223  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:06:08.245249  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:08.245282  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:08.318418  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:08.318471  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:08.318491  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:10.862696  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:10.873855  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:10.873930  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:10.900714  661844 cri.go:89] found id: ""
	I1201 22:06:10.900738  661844 logs.go:282] 0 containers: []
	W1201 22:06:10.900747  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:10.900754  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:10.900857  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:10.932623  661844 cri.go:89] found id: ""
	I1201 22:06:10.932648  661844 logs.go:282] 0 containers: []
	W1201 22:06:10.932657  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:10.932664  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:10.932726  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:10.967260  661844 cri.go:89] found id: ""
	I1201 22:06:10.967288  661844 logs.go:282] 0 containers: []
	W1201 22:06:10.967298  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:10.967305  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:10.967367  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:10.999472  661844 cri.go:89] found id: ""
	I1201 22:06:10.999495  661844 logs.go:282] 0 containers: []
	W1201 22:06:10.999504  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:10.999511  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:10.999579  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:11.028514  661844 cri.go:89] found id: ""
	I1201 22:06:11.028541  661844 logs.go:282] 0 containers: []
	W1201 22:06:11.028552  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:11.028559  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:11.028644  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:11.059688  661844 cri.go:89] found id: ""
	I1201 22:06:11.059718  661844 logs.go:282] 0 containers: []
	W1201 22:06:11.059729  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:11.059736  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:11.059825  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:11.088311  661844 cri.go:89] found id: ""
	I1201 22:06:11.088333  661844 logs.go:282] 0 containers: []
	W1201 22:06:11.088342  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:11.088349  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:11.088413  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:11.116454  661844 cri.go:89] found id: ""
	I1201 22:06:11.116481  661844 logs.go:282] 0 containers: []
	W1201 22:06:11.116490  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:11.116499  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:11.116510  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:11.161929  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:11.161975  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:11.193155  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:11.193185  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:11.263851  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:11.263888  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:06:11.281717  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:11.281746  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:11.348816  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:13.849082  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:13.860469  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:13.860541  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:13.889890  661844 cri.go:89] found id: ""
	I1201 22:06:13.889915  661844 logs.go:282] 0 containers: []
	W1201 22:06:13.889924  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:13.889931  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:13.889989  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:13.937944  661844 cri.go:89] found id: ""
	I1201 22:06:13.937971  661844 logs.go:282] 0 containers: []
	W1201 22:06:13.937981  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:13.937988  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:13.938051  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:13.972672  661844 cri.go:89] found id: ""
	I1201 22:06:13.972696  661844 logs.go:282] 0 containers: []
	W1201 22:06:13.972706  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:13.972714  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:13.972775  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:14.003933  661844 cri.go:89] found id: ""
	I1201 22:06:14.003960  661844 logs.go:282] 0 containers: []
	W1201 22:06:14.003968  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:14.003976  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:14.004050  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:14.033283  661844 cri.go:89] found id: ""
	I1201 22:06:14.033306  661844 logs.go:282] 0 containers: []
	W1201 22:06:14.033315  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:14.033323  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:14.033393  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:14.062027  661844 cri.go:89] found id: ""
	I1201 22:06:14.062102  661844 logs.go:282] 0 containers: []
	W1201 22:06:14.062137  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:14.062170  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:14.062249  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:14.088079  661844 cri.go:89] found id: ""
	I1201 22:06:14.088101  661844 logs.go:282] 0 containers: []
	W1201 22:06:14.088110  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:14.088116  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:14.088178  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:14.120434  661844 cri.go:89] found id: ""
	I1201 22:06:14.120458  661844 logs.go:282] 0 containers: []
	W1201 22:06:14.120467  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:14.120477  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:14.120488  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:06:14.140891  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:14.140924  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:14.214982  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:14.215001  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:14.215014  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:14.256508  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:14.256541  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:14.285960  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:14.285990  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:16.856092  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:16.869812  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:16.869891  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:16.896955  661844 cri.go:89] found id: ""
	I1201 22:06:16.896985  661844 logs.go:282] 0 containers: []
	W1201 22:06:16.896994  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:16.897001  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:16.897060  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:16.928177  661844 cri.go:89] found id: ""
	I1201 22:06:16.928198  661844 logs.go:282] 0 containers: []
	W1201 22:06:16.928207  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:16.928219  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:16.928278  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:16.967107  661844 cri.go:89] found id: ""
	I1201 22:06:16.967155  661844 logs.go:282] 0 containers: []
	W1201 22:06:16.967166  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:16.967173  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:16.967240  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:16.999126  661844 cri.go:89] found id: ""
	I1201 22:06:16.999178  661844 logs.go:282] 0 containers: []
	W1201 22:06:16.999187  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:16.999194  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:16.999260  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:17.027434  661844 cri.go:89] found id: ""
	I1201 22:06:17.027457  661844 logs.go:282] 0 containers: []
	W1201 22:06:17.027466  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:17.027472  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:17.027538  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:17.056994  661844 cri.go:89] found id: ""
	I1201 22:06:17.057021  661844 logs.go:282] 0 containers: []
	W1201 22:06:17.057031  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:17.057038  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:17.057097  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:17.082241  661844 cri.go:89] found id: ""
	I1201 22:06:17.082265  661844 logs.go:282] 0 containers: []
	W1201 22:06:17.082273  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:17.082280  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:17.082347  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:17.112981  661844 cri.go:89] found id: ""
	I1201 22:06:17.113007  661844 logs.go:282] 0 containers: []
	W1201 22:06:17.113017  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:17.113026  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:17.113038  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:17.180352  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:17.180391  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:06:17.197910  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:17.197941  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:17.269129  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:17.269150  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:17.269163  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:17.311937  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:17.311969  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:19.842005  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:19.853103  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:19.853179  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:19.880487  661844 cri.go:89] found id: ""
	I1201 22:06:19.880511  661844 logs.go:282] 0 containers: []
	W1201 22:06:19.880521  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:19.880528  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:19.880591  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:19.917439  661844 cri.go:89] found id: ""
	I1201 22:06:19.917467  661844 logs.go:282] 0 containers: []
	W1201 22:06:19.917477  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:19.917485  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:19.917559  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:19.954527  661844 cri.go:89] found id: ""
	I1201 22:06:19.954550  661844 logs.go:282] 0 containers: []
	W1201 22:06:19.954559  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:19.954567  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:19.954639  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:19.988082  661844 cri.go:89] found id: ""
	I1201 22:06:19.988111  661844 logs.go:282] 0 containers: []
	W1201 22:06:19.988122  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:19.988131  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:19.988206  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:20.022024  661844 cri.go:89] found id: ""
	I1201 22:06:20.022102  661844 logs.go:282] 0 containers: []
	W1201 22:06:20.022127  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:20.022148  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:20.022267  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:20.054558  661844 cri.go:89] found id: ""
	I1201 22:06:20.054638  661844 logs.go:282] 0 containers: []
	W1201 22:06:20.054663  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:20.054684  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:20.054803  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:20.083637  661844 cri.go:89] found id: ""
	I1201 22:06:20.083713  661844 logs.go:282] 0 containers: []
	W1201 22:06:20.083742  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:20.083757  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:20.083847  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:20.112595  661844 cri.go:89] found id: ""
	I1201 22:06:20.112634  661844 logs.go:282] 0 containers: []
	W1201 22:06:20.112646  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:20.112656  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:20.112669  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:06:20.131698  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:20.131731  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:20.207828  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:20.207894  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:20.207912  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:20.249110  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:20.249146  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:20.283350  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:20.283381  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:22.852417  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:22.865571  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:22.865649  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:22.897846  661844 cri.go:89] found id: ""
	I1201 22:06:22.897873  661844 logs.go:282] 0 containers: []
	W1201 22:06:22.897882  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:22.897888  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:22.897948  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:22.933089  661844 cri.go:89] found id: ""
	I1201 22:06:22.933118  661844 logs.go:282] 0 containers: []
	W1201 22:06:22.933129  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:22.933136  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:22.933201  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:22.980118  661844 cri.go:89] found id: ""
	I1201 22:06:22.980145  661844 logs.go:282] 0 containers: []
	W1201 22:06:22.980153  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:22.980159  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:22.980220  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:23.012388  661844 cri.go:89] found id: ""
	I1201 22:06:23.012414  661844 logs.go:282] 0 containers: []
	W1201 22:06:23.012424  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:23.012431  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:23.012491  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:23.041886  661844 cri.go:89] found id: ""
	I1201 22:06:23.041910  661844 logs.go:282] 0 containers: []
	W1201 22:06:23.041933  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:23.041940  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:23.042018  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:23.068953  661844 cri.go:89] found id: ""
	I1201 22:06:23.068978  661844 logs.go:282] 0 containers: []
	W1201 22:06:23.068987  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:23.068994  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:23.069061  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:23.095965  661844 cri.go:89] found id: ""
	I1201 22:06:23.095994  661844 logs.go:282] 0 containers: []
	W1201 22:06:23.096003  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:23.096009  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:23.096071  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:23.124393  661844 cri.go:89] found id: ""
	I1201 22:06:23.124421  661844 logs.go:282] 0 containers: []
	W1201 22:06:23.124430  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:23.124439  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:23.124456  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:23.155340  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:23.155369  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:23.226099  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:23.226140  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:06:23.248957  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:23.248988  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:23.343128  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:23.343221  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:23.343255  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:25.891230  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:25.907773  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:25.907850  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:25.951577  661844 cri.go:89] found id: ""
	I1201 22:06:25.951605  661844 logs.go:282] 0 containers: []
	W1201 22:06:25.951614  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:25.951620  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:25.951683  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:26.005313  661844 cri.go:89] found id: ""
	I1201 22:06:26.005344  661844 logs.go:282] 0 containers: []
	W1201 22:06:26.005356  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:26.005363  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:26.005439  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:26.051600  661844 cri.go:89] found id: ""
	I1201 22:06:26.051627  661844 logs.go:282] 0 containers: []
	W1201 22:06:26.051639  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:26.051647  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:26.051712  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:26.093361  661844 cri.go:89] found id: ""
	I1201 22:06:26.093386  661844 logs.go:282] 0 containers: []
	W1201 22:06:26.093395  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:26.093402  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:26.093467  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:26.135831  661844 cri.go:89] found id: ""
	I1201 22:06:26.135854  661844 logs.go:282] 0 containers: []
	W1201 22:06:26.135862  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:26.135868  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:26.135933  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:26.169365  661844 cri.go:89] found id: ""
	I1201 22:06:26.169387  661844 logs.go:282] 0 containers: []
	W1201 22:06:26.169395  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:26.169402  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:26.169463  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:26.220481  661844 cri.go:89] found id: ""
	I1201 22:06:26.220505  661844 logs.go:282] 0 containers: []
	W1201 22:06:26.220515  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:26.220525  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:26.220590  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:26.253004  661844 cri.go:89] found id: ""
	I1201 22:06:26.253027  661844 logs.go:282] 0 containers: []
	W1201 22:06:26.253036  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:26.253045  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:26.253058  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:26.342381  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:26.342399  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:26.342413  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:26.382757  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:26.382794  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:26.415799  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:26.415825  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:26.487680  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:26.487717  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:06:29.005478  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:29.018157  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:29.018227  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:29.060568  661844 cri.go:89] found id: ""
	I1201 22:06:29.060625  661844 logs.go:282] 0 containers: []
	W1201 22:06:29.060654  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:29.060679  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:29.060756  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:29.108119  661844 cri.go:89] found id: ""
	I1201 22:06:29.108143  661844 logs.go:282] 0 containers: []
	W1201 22:06:29.108153  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:29.108159  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:29.108227  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:29.150590  661844 cri.go:89] found id: ""
	I1201 22:06:29.150615  661844 logs.go:282] 0 containers: []
	W1201 22:06:29.150624  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:29.150630  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:29.150734  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:29.193384  661844 cri.go:89] found id: ""
	I1201 22:06:29.193408  661844 logs.go:282] 0 containers: []
	W1201 22:06:29.193417  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:29.193424  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:29.193515  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:29.242609  661844 cri.go:89] found id: ""
	I1201 22:06:29.242635  661844 logs.go:282] 0 containers: []
	W1201 22:06:29.242644  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:29.242651  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:29.242763  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:29.278609  661844 cri.go:89] found id: ""
	I1201 22:06:29.278687  661844 logs.go:282] 0 containers: []
	W1201 22:06:29.278710  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:29.278730  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:29.278828  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:29.317623  661844 cri.go:89] found id: ""
	I1201 22:06:29.317705  661844 logs.go:282] 0 containers: []
	W1201 22:06:29.317728  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:29.317749  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:29.317831  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:29.347553  661844 cri.go:89] found id: ""
	I1201 22:06:29.347637  661844 logs.go:282] 0 containers: []
	W1201 22:06:29.347660  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:29.347686  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:29.347725  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:06:29.368699  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:29.368782  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:29.457461  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:29.457537  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:29.457568  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:29.508919  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:29.508960  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:29.553067  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:29.553093  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:32.137062  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:32.148477  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:32.148548  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:32.175223  661844 cri.go:89] found id: ""
	I1201 22:06:32.175248  661844 logs.go:282] 0 containers: []
	W1201 22:06:32.175258  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:32.175266  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:32.175338  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:32.207609  661844 cri.go:89] found id: ""
	I1201 22:06:32.207635  661844 logs.go:282] 0 containers: []
	W1201 22:06:32.207644  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:32.207650  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:32.207712  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:32.234336  661844 cri.go:89] found id: ""
	I1201 22:06:32.234363  661844 logs.go:282] 0 containers: []
	W1201 22:06:32.234374  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:32.234381  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:32.234444  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:32.260826  661844 cri.go:89] found id: ""
	I1201 22:06:32.260851  661844 logs.go:282] 0 containers: []
	W1201 22:06:32.260860  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:32.260866  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:32.260926  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:32.288705  661844 cri.go:89] found id: ""
	I1201 22:06:32.288730  661844 logs.go:282] 0 containers: []
	W1201 22:06:32.288739  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:32.288746  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:32.288810  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:32.316581  661844 cri.go:89] found id: ""
	I1201 22:06:32.316607  661844 logs.go:282] 0 containers: []
	W1201 22:06:32.316617  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:32.316624  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:32.316687  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:32.344662  661844 cri.go:89] found id: ""
	I1201 22:06:32.344696  661844 logs.go:282] 0 containers: []
	W1201 22:06:32.344706  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:32.344713  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:32.344786  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:32.373856  661844 cri.go:89] found id: ""
	I1201 22:06:32.373879  661844 logs.go:282] 0 containers: []
	W1201 22:06:32.373888  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:32.373898  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:32.373910  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:32.449012  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:32.449079  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:06:32.467376  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:32.467479  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:32.560066  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:32.560146  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:32.560178  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:32.606365  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:32.606403  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:35.146075  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:35.156970  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:35.157046  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:35.183740  661844 cri.go:89] found id: ""
	I1201 22:06:35.183766  661844 logs.go:282] 0 containers: []
	W1201 22:06:35.183775  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:35.183782  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:35.183847  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:35.217883  661844 cri.go:89] found id: ""
	I1201 22:06:35.217910  661844 logs.go:282] 0 containers: []
	W1201 22:06:35.217919  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:35.217925  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:35.217986  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:35.244853  661844 cri.go:89] found id: ""
	I1201 22:06:35.244881  661844 logs.go:282] 0 containers: []
	W1201 22:06:35.244890  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:35.244896  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:35.244956  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:35.274159  661844 cri.go:89] found id: ""
	I1201 22:06:35.274186  661844 logs.go:282] 0 containers: []
	W1201 22:06:35.274196  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:35.274203  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:35.274272  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:35.305356  661844 cri.go:89] found id: ""
	I1201 22:06:35.305381  661844 logs.go:282] 0 containers: []
	W1201 22:06:35.305391  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:35.305398  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:35.305460  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:35.336424  661844 cri.go:89] found id: ""
	I1201 22:06:35.336451  661844 logs.go:282] 0 containers: []
	W1201 22:06:35.336461  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:35.336468  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:35.336533  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:35.362366  661844 cri.go:89] found id: ""
	I1201 22:06:35.362392  661844 logs.go:282] 0 containers: []
	W1201 22:06:35.362401  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:35.362410  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:35.362472  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:35.390373  661844 cri.go:89] found id: ""
	I1201 22:06:35.390398  661844 logs.go:282] 0 containers: []
	W1201 22:06:35.390407  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:35.390415  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:35.390427  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:35.458779  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:35.458815  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:06:35.475479  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:35.475512  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:35.547782  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:35.547806  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:35.547819  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:35.592051  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:35.592097  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:38.126324  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:38.138240  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:38.138311  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:38.165390  661844 cri.go:89] found id: ""
	I1201 22:06:38.165417  661844 logs.go:282] 0 containers: []
	W1201 22:06:38.165427  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:38.165433  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:38.165495  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:38.192855  661844 cri.go:89] found id: ""
	I1201 22:06:38.192877  661844 logs.go:282] 0 containers: []
	W1201 22:06:38.192886  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:38.192892  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:38.192954  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:38.222737  661844 cri.go:89] found id: ""
	I1201 22:06:38.222762  661844 logs.go:282] 0 containers: []
	W1201 22:06:38.222771  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:38.222778  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:38.222842  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:38.251042  661844 cri.go:89] found id: ""
	I1201 22:06:38.251069  661844 logs.go:282] 0 containers: []
	W1201 22:06:38.251078  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:38.251086  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:38.251174  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:38.278000  661844 cri.go:89] found id: ""
	I1201 22:06:38.278026  661844 logs.go:282] 0 containers: []
	W1201 22:06:38.278035  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:38.278042  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:38.278105  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:38.309895  661844 cri.go:89] found id: ""
	I1201 22:06:38.309918  661844 logs.go:282] 0 containers: []
	W1201 22:06:38.309927  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:38.309933  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:38.309997  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:38.336314  661844 cri.go:89] found id: ""
	I1201 22:06:38.336340  661844 logs.go:282] 0 containers: []
	W1201 22:06:38.336349  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:38.336355  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:38.336418  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:38.361822  661844 cri.go:89] found id: ""
	I1201 22:06:38.361848  661844 logs.go:282] 0 containers: []
	W1201 22:06:38.361858  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:38.361867  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:38.361879  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:38.391436  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:38.391463  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:38.462312  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:38.462351  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:06:38.483779  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:38.483816  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:38.552973  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:38.552994  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:38.553008  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:41.093874  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:41.105027  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:41.105102  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:41.131200  661844 cri.go:89] found id: ""
	I1201 22:06:41.131225  661844 logs.go:282] 0 containers: []
	W1201 22:06:41.131233  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:41.131240  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:41.131306  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:41.161458  661844 cri.go:89] found id: ""
	I1201 22:06:41.161482  661844 logs.go:282] 0 containers: []
	W1201 22:06:41.161491  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:41.161497  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:41.161554  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:41.191489  661844 cri.go:89] found id: ""
	I1201 22:06:41.191514  661844 logs.go:282] 0 containers: []
	W1201 22:06:41.191523  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:41.191530  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:41.191591  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:41.236905  661844 cri.go:89] found id: ""
	I1201 22:06:41.236933  661844 logs.go:282] 0 containers: []
	W1201 22:06:41.236943  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:41.236949  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:41.237014  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:41.278064  661844 cri.go:89] found id: ""
	I1201 22:06:41.278090  661844 logs.go:282] 0 containers: []
	W1201 22:06:41.278098  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:41.278104  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:41.278168  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:41.317371  661844 cri.go:89] found id: ""
	I1201 22:06:41.317396  661844 logs.go:282] 0 containers: []
	W1201 22:06:41.317405  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:41.317412  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:41.317473  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:41.354424  661844 cri.go:89] found id: ""
	I1201 22:06:41.354451  661844 logs.go:282] 0 containers: []
	W1201 22:06:41.354461  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:41.354468  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:41.354527  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:41.385592  661844 cri.go:89] found id: ""
	I1201 22:06:41.385616  661844 logs.go:282] 0 containers: []
	W1201 22:06:41.385637  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:41.385648  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:41.385662  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:06:41.403263  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:41.403463  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:41.494253  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:41.494373  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:41.494405  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:41.544694  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:41.544770  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:41.589134  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:41.589158  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:44.177828  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:44.189037  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:44.189108  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:44.222106  661844 cri.go:89] found id: ""
	I1201 22:06:44.222133  661844 logs.go:282] 0 containers: []
	W1201 22:06:44.222142  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:44.222150  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:44.222218  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:44.249683  661844 cri.go:89] found id: ""
	I1201 22:06:44.249708  661844 logs.go:282] 0 containers: []
	W1201 22:06:44.249717  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:44.249723  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:44.249784  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:44.277117  661844 cri.go:89] found id: ""
	I1201 22:06:44.277142  661844 logs.go:282] 0 containers: []
	W1201 22:06:44.277152  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:44.277159  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:44.277224  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:44.308335  661844 cri.go:89] found id: ""
	I1201 22:06:44.308360  661844 logs.go:282] 0 containers: []
	W1201 22:06:44.308368  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:44.308374  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:44.308434  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:44.335674  661844 cri.go:89] found id: ""
	I1201 22:06:44.335699  661844 logs.go:282] 0 containers: []
	W1201 22:06:44.335708  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:44.335715  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:44.335780  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:44.366507  661844 cri.go:89] found id: ""
	I1201 22:06:44.366533  661844 logs.go:282] 0 containers: []
	W1201 22:06:44.366543  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:44.366551  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:44.366622  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:44.394081  661844 cri.go:89] found id: ""
	I1201 22:06:44.394107  661844 logs.go:282] 0 containers: []
	W1201 22:06:44.394117  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:44.394123  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:44.394187  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:44.426837  661844 cri.go:89] found id: ""
	I1201 22:06:44.426864  661844 logs.go:282] 0 containers: []
	W1201 22:06:44.426874  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:44.426885  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:44.426897  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:44.497010  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:44.497049  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:06:44.514718  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:44.514751  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:44.585219  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:44.585239  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:44.585252  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:44.627872  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:44.627907  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:47.160423  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:47.171613  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:47.171680  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:47.204859  661844 cri.go:89] found id: ""
	I1201 22:06:47.204901  661844 logs.go:282] 0 containers: []
	W1201 22:06:47.204911  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:47.204919  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:47.205002  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:47.233376  661844 cri.go:89] found id: ""
	I1201 22:06:47.233411  661844 logs.go:282] 0 containers: []
	W1201 22:06:47.233430  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:47.233437  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:47.233507  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:47.266420  661844 cri.go:89] found id: ""
	I1201 22:06:47.266444  661844 logs.go:282] 0 containers: []
	W1201 22:06:47.266453  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:47.266459  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:47.266521  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:47.293541  661844 cri.go:89] found id: ""
	I1201 22:06:47.293629  661844 logs.go:282] 0 containers: []
	W1201 22:06:47.293666  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:47.293719  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:47.293812  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:47.326844  661844 cri.go:89] found id: ""
	I1201 22:06:47.326878  661844 logs.go:282] 0 containers: []
	W1201 22:06:47.326889  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:47.326896  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:47.326988  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:47.353235  661844 cri.go:89] found id: ""
	I1201 22:06:47.353258  661844 logs.go:282] 0 containers: []
	W1201 22:06:47.353267  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:47.353274  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:47.353337  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:47.380511  661844 cri.go:89] found id: ""
	I1201 22:06:47.380594  661844 logs.go:282] 0 containers: []
	W1201 22:06:47.380612  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:47.380620  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:47.380698  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:47.406462  661844 cri.go:89] found id: ""
	I1201 22:06:47.406488  661844 logs.go:282] 0 containers: []
	W1201 22:06:47.406497  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:47.406508  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:47.406521  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:06:47.423588  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:47.423622  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:47.492637  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:47.492702  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:47.492731  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:47.541086  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:47.541171  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:47.588379  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:47.588424  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:50.190691  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:50.202908  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:50.203031  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:50.233861  661844 cri.go:89] found id: ""
	I1201 22:06:50.233885  661844 logs.go:282] 0 containers: []
	W1201 22:06:50.233895  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:50.233901  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:50.233967  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:50.262757  661844 cri.go:89] found id: ""
	I1201 22:06:50.262780  661844 logs.go:282] 0 containers: []
	W1201 22:06:50.262789  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:50.262796  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:50.262859  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:50.288065  661844 cri.go:89] found id: ""
	I1201 22:06:50.288095  661844 logs.go:282] 0 containers: []
	W1201 22:06:50.288105  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:50.288112  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:50.288177  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:50.314474  661844 cri.go:89] found id: ""
	I1201 22:06:50.314495  661844 logs.go:282] 0 containers: []
	W1201 22:06:50.314504  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:50.314511  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:50.314568  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:50.340955  661844 cri.go:89] found id: ""
	I1201 22:06:50.340980  661844 logs.go:282] 0 containers: []
	W1201 22:06:50.340990  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:50.340996  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:50.341055  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:50.367092  661844 cri.go:89] found id: ""
	I1201 22:06:50.367118  661844 logs.go:282] 0 containers: []
	W1201 22:06:50.367127  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:50.367178  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:50.367249  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:50.396628  661844 cri.go:89] found id: ""
	I1201 22:06:50.396654  661844 logs.go:282] 0 containers: []
	W1201 22:06:50.396663  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:50.396670  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:50.396727  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:50.425850  661844 cri.go:89] found id: ""
	I1201 22:06:50.425873  661844 logs.go:282] 0 containers: []
	W1201 22:06:50.425882  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:50.425891  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:50.425902  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:50.505130  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:50.505172  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:06:50.522562  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:50.522611  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:50.593892  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:50.593914  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:50.593929  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:50.634459  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:50.634497  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:53.183477  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:53.197657  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:53.197730  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:53.235416  661844 cri.go:89] found id: ""
	I1201 22:06:53.235440  661844 logs.go:282] 0 containers: []
	W1201 22:06:53.235448  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:53.235454  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:53.235514  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:53.264835  661844 cri.go:89] found id: ""
	I1201 22:06:53.264860  661844 logs.go:282] 0 containers: []
	W1201 22:06:53.264869  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:53.264876  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:53.264937  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:53.291710  661844 cri.go:89] found id: ""
	I1201 22:06:53.291737  661844 logs.go:282] 0 containers: []
	W1201 22:06:53.291747  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:53.291754  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:53.291817  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:53.318542  661844 cri.go:89] found id: ""
	I1201 22:06:53.318568  661844 logs.go:282] 0 containers: []
	W1201 22:06:53.318577  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:53.318583  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:53.318640  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:53.344912  661844 cri.go:89] found id: ""
	I1201 22:06:53.344935  661844 logs.go:282] 0 containers: []
	W1201 22:06:53.344943  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:53.344950  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:53.345018  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:53.371981  661844 cri.go:89] found id: ""
	I1201 22:06:53.372003  661844 logs.go:282] 0 containers: []
	W1201 22:06:53.372011  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:53.372018  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:53.372077  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:53.398473  661844 cri.go:89] found id: ""
	I1201 22:06:53.398495  661844 logs.go:282] 0 containers: []
	W1201 22:06:53.398504  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:53.398510  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:53.398574  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:53.427627  661844 cri.go:89] found id: ""
	I1201 22:06:53.427650  661844 logs.go:282] 0 containers: []
	W1201 22:06:53.427658  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:53.427667  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:53.427679  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:53.493929  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:53.493964  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:06:53.512432  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:53.512463  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:53.580205  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:53.580227  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:53.580241  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:53.623683  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:53.623728  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:56.156432  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:56.167915  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:56.167996  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:56.197207  661844 cri.go:89] found id: ""
	I1201 22:06:56.197234  661844 logs.go:282] 0 containers: []
	W1201 22:06:56.197244  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:56.197252  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:56.197322  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:56.225980  661844 cri.go:89] found id: ""
	I1201 22:06:56.226010  661844 logs.go:282] 0 containers: []
	W1201 22:06:56.226021  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:56.226030  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:56.226100  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:56.254017  661844 cri.go:89] found id: ""
	I1201 22:06:56.254044  661844 logs.go:282] 0 containers: []
	W1201 22:06:56.254052  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:56.254060  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:56.254124  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:56.282338  661844 cri.go:89] found id: ""
	I1201 22:06:56.282367  661844 logs.go:282] 0 containers: []
	W1201 22:06:56.282377  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:56.282385  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:56.282455  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:56.310105  661844 cri.go:89] found id: ""
	I1201 22:06:56.310133  661844 logs.go:282] 0 containers: []
	W1201 22:06:56.310142  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:56.310151  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:56.310219  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:56.339117  661844 cri.go:89] found id: ""
	I1201 22:06:56.339180  661844 logs.go:282] 0 containers: []
	W1201 22:06:56.339191  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:56.339199  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:56.339281  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:56.366522  661844 cri.go:89] found id: ""
	I1201 22:06:56.366560  661844 logs.go:282] 0 containers: []
	W1201 22:06:56.366572  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:56.366579  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:56.366650  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:56.393614  661844 cri.go:89] found id: ""
	I1201 22:06:56.393637  661844 logs.go:282] 0 containers: []
	W1201 22:06:56.393646  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:56.393655  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:56.393668  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:56.461202  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:56.461241  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:06:56.479696  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:56.479728  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:56.554355  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:56.554382  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:56.554430  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:56.598679  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:56.598727  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:59.131295  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:06:59.142320  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:06:59.142390  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:06:59.169944  661844 cri.go:89] found id: ""
	I1201 22:06:59.169966  661844 logs.go:282] 0 containers: []
	W1201 22:06:59.169975  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:06:59.169981  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:06:59.170041  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:06:59.202773  661844 cri.go:89] found id: ""
	I1201 22:06:59.202796  661844 logs.go:282] 0 containers: []
	W1201 22:06:59.202805  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:06:59.202812  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:06:59.202874  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:06:59.229782  661844 cri.go:89] found id: ""
	I1201 22:06:59.229804  661844 logs.go:282] 0 containers: []
	W1201 22:06:59.229820  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:06:59.229827  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:06:59.229889  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:06:59.256881  661844 cri.go:89] found id: ""
	I1201 22:06:59.256906  661844 logs.go:282] 0 containers: []
	W1201 22:06:59.256915  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:06:59.256922  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:06:59.256984  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:06:59.286940  661844 cri.go:89] found id: ""
	I1201 22:06:59.286968  661844 logs.go:282] 0 containers: []
	W1201 22:06:59.286993  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:06:59.287001  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:06:59.287065  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:06:59.316788  661844 cri.go:89] found id: ""
	I1201 22:06:59.316812  661844 logs.go:282] 0 containers: []
	W1201 22:06:59.316821  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:06:59.316828  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:06:59.316898  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:06:59.350928  661844 cri.go:89] found id: ""
	I1201 22:06:59.350950  661844 logs.go:282] 0 containers: []
	W1201 22:06:59.350959  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:06:59.350966  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:06:59.351037  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:06:59.378285  661844 cri.go:89] found id: ""
	I1201 22:06:59.378309  661844 logs.go:282] 0 containers: []
	W1201 22:06:59.378318  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:06:59.378326  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:06:59.378339  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:06:59.450194  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:06:59.450214  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:06:59.450227  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:06:59.491115  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:06:59.491172  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:06:59.523259  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:06:59.523291  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:06:59.592930  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:06:59.592974  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:07:02.111719  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:07:02.123155  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:07:02.123231  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:07:02.151434  661844 cri.go:89] found id: ""
	I1201 22:07:02.151460  661844 logs.go:282] 0 containers: []
	W1201 22:07:02.151481  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:07:02.151489  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:07:02.151559  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:07:02.182419  661844 cri.go:89] found id: ""
	I1201 22:07:02.182441  661844 logs.go:282] 0 containers: []
	W1201 22:07:02.182449  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:07:02.182456  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:07:02.182515  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:07:02.217164  661844 cri.go:89] found id: ""
	I1201 22:07:02.217189  661844 logs.go:282] 0 containers: []
	W1201 22:07:02.217198  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:07:02.217204  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:07:02.217277  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:07:02.245693  661844 cri.go:89] found id: ""
	I1201 22:07:02.245717  661844 logs.go:282] 0 containers: []
	W1201 22:07:02.245727  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:07:02.245733  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:07:02.245845  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:07:02.274176  661844 cri.go:89] found id: ""
	I1201 22:07:02.274199  661844 logs.go:282] 0 containers: []
	W1201 22:07:02.274207  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:07:02.274217  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:07:02.274279  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:07:02.302378  661844 cri.go:89] found id: ""
	I1201 22:07:02.302401  661844 logs.go:282] 0 containers: []
	W1201 22:07:02.302409  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:07:02.302416  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:07:02.302486  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:07:02.329904  661844 cri.go:89] found id: ""
	I1201 22:07:02.329975  661844 logs.go:282] 0 containers: []
	W1201 22:07:02.330004  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:07:02.330025  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:07:02.330150  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:07:02.356177  661844 cri.go:89] found id: ""
	I1201 22:07:02.356204  661844 logs.go:282] 0 containers: []
	W1201 22:07:02.356213  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:07:02.356222  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:07:02.356235  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:07:02.427953  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:07:02.427989  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:07:02.445472  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:07:02.445502  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:07:02.535256  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:07:02.535342  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:07:02.535374  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:07:02.589162  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:07:02.589196  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:07:05.134467  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:07:05.146543  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:07:05.146623  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:07:05.177830  661844 cri.go:89] found id: ""
	I1201 22:07:05.177854  661844 logs.go:282] 0 containers: []
	W1201 22:07:05.177864  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:07:05.177871  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:07:05.177942  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:07:05.215997  661844 cri.go:89] found id: ""
	I1201 22:07:05.216022  661844 logs.go:282] 0 containers: []
	W1201 22:07:05.216031  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:07:05.216038  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:07:05.216101  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:07:05.246865  661844 cri.go:89] found id: ""
	I1201 22:07:05.246894  661844 logs.go:282] 0 containers: []
	W1201 22:07:05.246904  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:07:05.246911  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:07:05.246995  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:07:05.276862  661844 cri.go:89] found id: ""
	I1201 22:07:05.276888  661844 logs.go:282] 0 containers: []
	W1201 22:07:05.276898  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:07:05.276905  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:07:05.276970  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:07:05.307043  661844 cri.go:89] found id: ""
	I1201 22:07:05.307068  661844 logs.go:282] 0 containers: []
	W1201 22:07:05.307078  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:07:05.307085  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:07:05.307171  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:07:05.333507  661844 cri.go:89] found id: ""
	I1201 22:07:05.333529  661844 logs.go:282] 0 containers: []
	W1201 22:07:05.333538  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:07:05.333545  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:07:05.333636  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:07:05.364560  661844 cri.go:89] found id: ""
	I1201 22:07:05.364587  661844 logs.go:282] 0 containers: []
	W1201 22:07:05.364599  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:07:05.364607  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:07:05.364722  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:07:05.396473  661844 cri.go:89] found id: ""
	I1201 22:07:05.396496  661844 logs.go:282] 0 containers: []
	W1201 22:07:05.396507  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:07:05.396517  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:07:05.396529  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:07:05.463351  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:07:05.463387  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:07:05.480330  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:07:05.480360  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:07:05.547862  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:07:05.547939  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:07:05.547969  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:07:05.588875  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:07:05.588910  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1201 22:07:08.120384  661844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:07:08.132174  661844 kubeadm.go:602] duration metric: took 4m2.924818854s to restartPrimaryControlPlane
	W1201 22:07:08.132241  661844 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1201 22:07:08.132309  661844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1201 22:07:08.541545  661844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 22:07:08.555077  661844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 22:07:08.563488  661844 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 22:07:08.563561  661844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 22:07:08.572064  661844 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 22:07:08.572087  661844 kubeadm.go:158] found existing configuration files:
	
	I1201 22:07:08.572138  661844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1201 22:07:08.580757  661844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 22:07:08.580832  661844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 22:07:08.589124  661844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1201 22:07:08.597862  661844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 22:07:08.597930  661844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 22:07:08.606536  661844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1201 22:07:08.615851  661844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 22:07:08.615927  661844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 22:07:08.624285  661844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1201 22:07:08.632992  661844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 22:07:08.633067  661844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 22:07:08.641417  661844 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 22:07:08.762458  661844 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1201 22:07:08.762891  661844 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1201 22:07:08.842872  661844 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 22:11:10.316036  661844 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1201 22:11:10.316076  661844 kubeadm.go:319] 
	I1201 22:11:10.316145  661844 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1201 22:11:10.320449  661844 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 22:11:10.320516  661844 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 22:11:10.320610  661844 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 22:11:10.320673  661844 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1201 22:11:10.320716  661844 kubeadm.go:319] OS: Linux
	I1201 22:11:10.320766  661844 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 22:11:10.320818  661844 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1201 22:11:10.320870  661844 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 22:11:10.320923  661844 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 22:11:10.320974  661844 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 22:11:10.321027  661844 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 22:11:10.321076  661844 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 22:11:10.321128  661844 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 22:11:10.321178  661844 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1201 22:11:10.321262  661844 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 22:11:10.321360  661844 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 22:11:10.321449  661844 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 22:11:10.321512  661844 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 22:11:10.324392  661844 out.go:252]   - Generating certificates and keys ...
	I1201 22:11:10.324484  661844 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 22:11:10.324546  661844 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 22:11:10.324619  661844 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1201 22:11:10.324676  661844 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1201 22:11:10.324742  661844 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1201 22:11:10.324793  661844 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1201 22:11:10.324853  661844 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1201 22:11:10.324911  661844 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1201 22:11:10.324981  661844 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1201 22:11:10.325050  661844 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1201 22:11:10.325086  661844 kubeadm.go:319] [certs] Using the existing "sa" key
	I1201 22:11:10.325138  661844 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 22:11:10.325187  661844 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 22:11:10.325241  661844 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 22:11:10.325296  661844 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 22:11:10.325367  661844 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 22:11:10.325421  661844 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 22:11:10.325502  661844 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 22:11:10.325565  661844 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 22:11:10.328773  661844 out.go:252]   - Booting up control plane ...
	I1201 22:11:10.328958  661844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 22:11:10.329097  661844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 22:11:10.329214  661844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 22:11:10.329357  661844 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 22:11:10.329469  661844 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 22:11:10.329584  661844 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 22:11:10.329675  661844 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 22:11:10.329717  661844 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 22:11:10.329861  661844 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 22:11:10.329981  661844 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 22:11:10.330052  661844 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000610913s
	I1201 22:11:10.330056  661844 kubeadm.go:319] 
	I1201 22:11:10.330117  661844 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1201 22:11:10.330152  661844 kubeadm.go:319] 	- The kubelet is not running
	I1201 22:11:10.330279  661844 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1201 22:11:10.330284  661844 kubeadm.go:319] 
	I1201 22:11:10.330397  661844 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1201 22:11:10.330431  661844 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1201 22:11:10.330468  661844 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1201 22:11:10.330580  661844 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000610913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1201 22:11:10.330659  661844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1201 22:11:10.331119  661844 kubeadm.go:319] 
	I1201 22:11:10.752785  661844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 22:11:10.769471  661844 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 22:11:10.769541  661844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 22:11:10.783355  661844 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 22:11:10.783376  661844 kubeadm.go:158] found existing configuration files:
	
	I1201 22:11:10.783429  661844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1201 22:11:10.793807  661844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 22:11:10.793874  661844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 22:11:10.803377  661844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1201 22:11:10.815256  661844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 22:11:10.815327  661844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 22:11:10.825831  661844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1201 22:11:10.836521  661844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 22:11:10.836589  661844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 22:11:10.845777  661844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1201 22:11:10.857065  661844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 22:11:10.857130  661844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 22:11:10.866303  661844 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 22:11:10.928739  661844 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 22:11:10.929148  661844 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 22:11:11.034840  661844 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 22:11:11.034916  661844 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1201 22:11:11.034955  661844 kubeadm.go:319] OS: Linux
	I1201 22:11:11.035001  661844 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 22:11:11.035051  661844 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1201 22:11:11.035099  661844 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 22:11:11.035167  661844 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 22:11:11.035229  661844 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 22:11:11.035284  661844 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 22:11:11.035330  661844 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 22:11:11.035379  661844 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 22:11:11.035425  661844 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1201 22:11:11.138265  661844 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 22:11:11.138390  661844 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 22:11:11.138483  661844 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 22:11:11.161961  661844 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 22:11:11.166168  661844 out.go:252]   - Generating certificates and keys ...
	I1201 22:11:11.166264  661844 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 22:11:11.166354  661844 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 22:11:11.166440  661844 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1201 22:11:11.166508  661844 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1201 22:11:11.166582  661844 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1201 22:11:11.166642  661844 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1201 22:11:11.166708  661844 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1201 22:11:11.166773  661844 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1201 22:11:11.166851  661844 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1201 22:11:11.167374  661844 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1201 22:11:11.167726  661844 kubeadm.go:319] [certs] Using the existing "sa" key
	I1201 22:11:11.167794  661844 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 22:11:11.466210  661844 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 22:11:11.596073  661844 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 22:11:12.024697  661844 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 22:11:12.396856  661844 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 22:11:12.497800  661844 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 22:11:12.499750  661844 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 22:11:12.502357  661844 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 22:11:12.505867  661844 out.go:252]   - Booting up control plane ...
	I1201 22:11:12.505975  661844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 22:11:12.506059  661844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 22:11:12.506130  661844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 22:11:12.524721  661844 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 22:11:12.525145  661844 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 22:11:12.538479  661844 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 22:11:12.538581  661844 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 22:11:12.538622  661844 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 22:11:12.719115  661844 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 22:11:12.719297  661844 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 22:15:12.719724  661844 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000974446s
	I1201 22:15:12.719759  661844 kubeadm.go:319] 
	I1201 22:15:12.719817  661844 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1201 22:15:12.719850  661844 kubeadm.go:319] 	- The kubelet is not running
	I1201 22:15:12.719955  661844 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1201 22:15:12.719961  661844 kubeadm.go:319] 
	I1201 22:15:12.720065  661844 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1201 22:15:12.720097  661844 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1201 22:15:12.720128  661844 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1201 22:15:12.720132  661844 kubeadm.go:319] 
	I1201 22:15:12.724042  661844 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1201 22:15:12.724470  661844 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1201 22:15:12.724584  661844 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 22:15:12.724824  661844 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1201 22:15:12.724835  661844 kubeadm.go:319] 
	I1201 22:15:12.724906  661844 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1201 22:15:12.724973  661844 kubeadm.go:403] duration metric: took 12m7.587019156s to StartCluster
	I1201 22:15:12.725016  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:15:12.725092  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:15:12.751306  661844 cri.go:89] found id: ""
	I1201 22:15:12.751332  661844 logs.go:282] 0 containers: []
	W1201 22:15:12.751341  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:15:12.751348  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:15:12.751434  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:15:12.785423  661844 cri.go:89] found id: ""
	I1201 22:15:12.785448  661844 logs.go:282] 0 containers: []
	W1201 22:15:12.785458  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:15:12.785464  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:15:12.785523  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:15:12.813047  661844 cri.go:89] found id: ""
	I1201 22:15:12.813078  661844 logs.go:282] 0 containers: []
	W1201 22:15:12.813087  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:15:12.813093  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:15:12.813155  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:15:12.842568  661844 cri.go:89] found id: ""
	I1201 22:15:12.842651  661844 logs.go:282] 0 containers: []
	W1201 22:15:12.842679  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:15:12.842716  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:15:12.842805  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:15:12.869087  661844 cri.go:89] found id: ""
	I1201 22:15:12.869111  661844 logs.go:282] 0 containers: []
	W1201 22:15:12.869121  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:15:12.869127  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:15:12.869189  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:15:12.896165  661844 cri.go:89] found id: ""
	I1201 22:15:12.896188  661844 logs.go:282] 0 containers: []
	W1201 22:15:12.896197  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:15:12.896204  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:15:12.896265  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:15:12.924179  661844 cri.go:89] found id: ""
	I1201 22:15:12.924206  661844 logs.go:282] 0 containers: []
	W1201 22:15:12.924216  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:15:12.924222  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:15:12.924286  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:15:12.950180  661844 cri.go:89] found id: ""
	I1201 22:15:12.950207  661844 logs.go:282] 0 containers: []
	W1201 22:15:12.950216  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:15:12.950225  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:15:12.950245  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:15:13.019062  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:15:13.019107  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:15:13.041962  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:15:13.042018  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:15:13.114819  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:15:13.114842  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:15:13.114854  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:15:13.160628  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:15:13.160674  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1201 22:15:13.195652  661844 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000974446s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1201 22:15:13.195715  661844 out.go:285] * 
	W1201 22:15:13.195881  661844 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000974446s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1201 22:15:13.195902  661844 out.go:285] * 
	W1201 22:15:13.198169  661844 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 22:15:13.203333  661844 out.go:203] 
	W1201 22:15:13.206184  661844 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000974446s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1201 22:15:13.206243  661844 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1201 22:15:13.206264  661844 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1201 22:15:13.209359  661844 out.go:203] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-738753 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-738753 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-738753 version --output=json: exit status 1 (102.65895ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-01 22:15:13.780956696 +0000 UTC m=+5858.506758212
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-738753
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-738753:

-- stdout --
	[
	    {
	        "Id": "baf867679d34a2038f298d8db6709b2abbebfef5f9af57804a463d9961c37e5b",
	        "Created": "2025-12-01T22:02:04.884428973Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 662358,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T22:02:37.539668051Z",
	            "FinishedAt": "2025-12-01T22:02:36.032011081Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/baf867679d34a2038f298d8db6709b2abbebfef5f9af57804a463d9961c37e5b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/baf867679d34a2038f298d8db6709b2abbebfef5f9af57804a463d9961c37e5b/hostname",
	        "HostsPath": "/var/lib/docker/containers/baf867679d34a2038f298d8db6709b2abbebfef5f9af57804a463d9961c37e5b/hosts",
	        "LogPath": "/var/lib/docker/containers/baf867679d34a2038f298d8db6709b2abbebfef5f9af57804a463d9961c37e5b/baf867679d34a2038f298d8db6709b2abbebfef5f9af57804a463d9961c37e5b-json.log",
	        "Name": "/kubernetes-upgrade-738753",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-738753:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-738753",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "baf867679d34a2038f298d8db6709b2abbebfef5f9af57804a463d9961c37e5b",
	                "LowerDir": "/var/lib/docker/overlay2/877360f48df1bc4310953dd079a19f64f2325eb45aa83fd714cc5ea45dc7aa75-init/diff:/var/lib/docker/overlay2/f0ba49b44048d740697b37803f992c2f7a99e21ce77995ff128ceffc01329aa1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/877360f48df1bc4310953dd079a19f64f2325eb45aa83fd714cc5ea45dc7aa75/merged",
	                "UpperDir": "/var/lib/docker/overlay2/877360f48df1bc4310953dd079a19f64f2325eb45aa83fd714cc5ea45dc7aa75/diff",
	                "WorkDir": "/var/lib/docker/overlay2/877360f48df1bc4310953dd079a19f64f2325eb45aa83fd714cc5ea45dc7aa75/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-738753",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-738753/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-738753",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-738753",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-738753",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5cce3119d39a68dc5e3033b12b3416e7f25ada2c92d2c50def89f5bd9db69549",
	            "SandboxKey": "/var/run/docker/netns/5cce3119d39a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33408"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33412"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33410"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33411"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-738753": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:ba:a2:00:75:24",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fb9d6fa7d35fe65c2e5a6bc0f31ba17c1a20c5a1b49ec633dac2b0d2247aedab",
	                    "EndpointID": "9aaacaeb337792f1a49d5f66a8b64b560467a86a68c1c620f2dfc0f45382696f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-738753",
	                        "baf867679d34"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-738753 -n kubernetes-upgrade-738753
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-738753 -n kubernetes-upgrade-738753: exit status 2 (336.772787ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-738753 logs -n 25
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-390527 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo systemctl status docker --all --full --no-pager                                      │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo systemctl cat docker --no-pager                                                      │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo cat /etc/docker/daemon.json                                                          │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo docker system info                                                                   │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo cri-dockerd --version                                                                │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo systemctl cat containerd --no-pager                                                  │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo cat /etc/containerd/config.toml                                                      │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo containerd config dump                                                               │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo systemctl status crio --all --full --no-pager                                        │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo systemctl cat crio --no-pager                                                        │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ ssh     │ -p cilium-390527 sudo crio config                                                                          │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │                     │
	│ delete  │ -p cilium-390527                                                                                           │ cilium-390527            │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │ 01 Dec 25 22:12 UTC │
	│ start   │ -p force-systemd-env-065520 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-065520 │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │ 01 Dec 25 22:12 UTC │
	│ delete  │ -p force-systemd-env-065520                                                                                │ force-systemd-env-065520 │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │ 01 Dec 25 22:12 UTC │
	│ start   │ -p cert-expiration-663052 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio     │ cert-expiration-663052   │ jenkins │ v1.37.0 │ 01 Dec 25 22:12 UTC │ 01 Dec 25 22:13 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 22:12:37
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 22:12:37.682806  700791 out.go:360] Setting OutFile to fd 1 ...
	I1201 22:12:37.682911  700791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 22:12:37.682915  700791 out.go:374] Setting ErrFile to fd 2...
	I1201 22:12:37.682920  700791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 22:12:37.683561  700791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 22:12:37.684076  700791 out.go:368] Setting JSON to false
	I1201 22:12:37.685018  700791 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":14107,"bootTime":1764613051,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 22:12:37.685087  700791 start.go:143] virtualization:  
	I1201 22:12:37.688927  700791 out.go:179] * [cert-expiration-663052] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 22:12:37.693667  700791 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 22:12:37.693750  700791 notify.go:221] Checking for updates...
	I1201 22:12:37.700588  700791 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 22:12:37.703889  700791 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 22:12:37.707148  700791 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 22:12:37.710271  700791 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 22:12:37.713275  700791 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 22:12:37.716902  700791 config.go:182] Loaded profile config "kubernetes-upgrade-738753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 22:12:37.717001  700791 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 22:12:37.740703  700791 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 22:12:37.740811  700791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 22:12:37.815417  700791 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 22:12:37.805610473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 22:12:37.815512  700791 docker.go:319] overlay module found
	I1201 22:12:37.818743  700791 out.go:179] * Using the docker driver based on user configuration
	I1201 22:12:37.821618  700791 start.go:309] selected driver: docker
	I1201 22:12:37.821627  700791 start.go:927] validating driver "docker" against <nil>
	I1201 22:12:37.821638  700791 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 22:12:37.822406  700791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 22:12:37.890553  700791 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 22:12:37.880516167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 22:12:37.890699  700791 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1201 22:12:37.890915  700791 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1201 22:12:37.893981  700791 out.go:179] * Using Docker driver with root privileges
	I1201 22:12:37.896949  700791 cni.go:84] Creating CNI manager for ""
	I1201 22:12:37.897020  700791 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 22:12:37.897029  700791 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1201 22:12:37.897114  700791 start.go:353] cluster config:
	{Name:cert-expiration-663052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-663052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 22:12:37.900434  700791 out.go:179] * Starting "cert-expiration-663052" primary control-plane node in "cert-expiration-663052" cluster
	I1201 22:12:37.903309  700791 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 22:12:37.906331  700791 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 22:12:37.909234  700791 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 22:12:37.909273  700791 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1201 22:12:37.909282  700791 cache.go:65] Caching tarball of preloaded images
	I1201 22:12:37.909352  700791 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 22:12:37.909376  700791 preload.go:238] Found /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1201 22:12:37.909385  700791 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 22:12:37.909509  700791 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/config.json ...
	I1201 22:12:37.909527  700791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/config.json: {Name:mk7077871a6d92fe2bd8396f24cec580295963ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 22:12:37.930838  700791 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 22:12:37.930850  700791 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1201 22:12:37.930870  700791 cache.go:243] Successfully downloaded all kic artifacts
	I1201 22:12:37.930902  700791 start.go:360] acquireMachinesLock for cert-expiration-663052: {Name:mk90d7626aa1825972b74c3f0a4e3624e78bfc84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 22:12:37.931013  700791 start.go:364] duration metric: took 97.738µs to acquireMachinesLock for "cert-expiration-663052"
	I1201 22:12:37.931038  700791 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-663052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-663052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 22:12:37.931099  700791 start.go:125] createHost starting for "" (driver="docker")
	I1201 22:12:37.936334  700791 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1201 22:12:37.936590  700791 start.go:159] libmachine.API.Create for "cert-expiration-663052" (driver="docker")
	I1201 22:12:37.936626  700791 client.go:173] LocalClient.Create starting
	I1201 22:12:37.936703  700791 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem
	I1201 22:12:37.936743  700791 main.go:143] libmachine: Decoding PEM data...
	I1201 22:12:37.936757  700791 main.go:143] libmachine: Parsing certificate...
	I1201 22:12:37.936818  700791 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem
	I1201 22:12:37.936838  700791 main.go:143] libmachine: Decoding PEM data...
	I1201 22:12:37.936847  700791 main.go:143] libmachine: Parsing certificate...
	I1201 22:12:37.937223  700791 cli_runner.go:164] Run: docker network inspect cert-expiration-663052 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1201 22:12:37.954086  700791 cli_runner.go:211] docker network inspect cert-expiration-663052 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1201 22:12:37.954171  700791 network_create.go:284] running [docker network inspect cert-expiration-663052] to gather additional debugging logs...
	I1201 22:12:37.954187  700791 cli_runner.go:164] Run: docker network inspect cert-expiration-663052
	W1201 22:12:37.970499  700791 cli_runner.go:211] docker network inspect cert-expiration-663052 returned with exit code 1
	I1201 22:12:37.970535  700791 network_create.go:287] error running [docker network inspect cert-expiration-663052]: docker network inspect cert-expiration-663052: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-663052 not found
	I1201 22:12:37.970555  700791 network_create.go:289] output of [docker network inspect cert-expiration-663052]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-663052 not found
	
	** /stderr **
	I1201 22:12:37.970671  700791 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 22:12:37.990255  700791 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8bc0bedecc32 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7e:01:66:4f:e0:c6} reservation:<nil>}
	I1201 22:12:37.990511  700791 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-a6252c828073 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:e8:4c:75:77:f0} reservation:<nil>}
	I1201 22:12:37.990781  700791 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-edf8b0d8d0d8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:1c:58:7a:1a:be} reservation:<nil>}
	I1201 22:12:37.991080  700791 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fb9d6fa7d35f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:de:90:91:91:b9:81} reservation:<nil>}
	I1201 22:12:37.991615  700791 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019fc5a0}
	I1201 22:12:37.991641  700791 network_create.go:124] attempt to create docker network cert-expiration-663052 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1201 22:12:37.991699  700791 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-663052 cert-expiration-663052
	I1201 22:12:38.064623  700791 network_create.go:108] docker network cert-expiration-663052 192.168.85.0/24 created
	I1201 22:12:38.064646  700791 kic.go:121] calculated static IP "192.168.85.2" for the "cert-expiration-663052" container
	I1201 22:12:38.064720  700791 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1201 22:12:38.082856  700791 cli_runner.go:164] Run: docker volume create cert-expiration-663052 --label name.minikube.sigs.k8s.io=cert-expiration-663052 --label created_by.minikube.sigs.k8s.io=true
	I1201 22:12:38.104182  700791 oci.go:103] Successfully created a docker volume cert-expiration-663052
	I1201 22:12:38.104267  700791 cli_runner.go:164] Run: docker run --rm --name cert-expiration-663052-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-663052 --entrypoint /usr/bin/test -v cert-expiration-663052:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1201 22:12:38.698955  700791 oci.go:107] Successfully prepared a docker volume cert-expiration-663052
	I1201 22:12:38.699008  700791 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 22:12:38.699016  700791 kic.go:194] Starting extracting preloaded images to volume ...
	I1201 22:12:38.699094  700791 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-663052:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1201 22:12:42.847435  700791 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-663052:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.148304275s)
	I1201 22:12:42.847461  700791 kic.go:203] duration metric: took 4.148441494s to extract preloaded images to volume ...
	W1201 22:12:42.847616  700791 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1201 22:12:42.847726  700791 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1201 22:12:42.905037  700791 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-663052 --name cert-expiration-663052 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-663052 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-663052 --network cert-expiration-663052 --ip 192.168.85.2 --volume cert-expiration-663052:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1201 22:12:43.228369  700791 cli_runner.go:164] Run: docker container inspect cert-expiration-663052 --format={{.State.Running}}
	I1201 22:12:43.252055  700791 cli_runner.go:164] Run: docker container inspect cert-expiration-663052 --format={{.State.Status}}
	I1201 22:12:43.279957  700791 cli_runner.go:164] Run: docker exec cert-expiration-663052 stat /var/lib/dpkg/alternatives/iptables
	I1201 22:12:43.336918  700791 oci.go:144] the created container "cert-expiration-663052" has a running status.
	I1201 22:12:43.336939  700791 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/cert-expiration-663052/id_rsa...
	I1201 22:12:43.450773  700791 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-482752/.minikube/machines/cert-expiration-663052/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1201 22:12:43.477302  700791 cli_runner.go:164] Run: docker container inspect cert-expiration-663052 --format={{.State.Status}}
	I1201 22:12:43.514287  700791 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1201 22:12:43.514313  700791 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-663052 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1201 22:12:43.632190  700791 cli_runner.go:164] Run: docker container inspect cert-expiration-663052 --format={{.State.Status}}
	I1201 22:12:43.660869  700791 machine.go:94] provisionDockerMachine start ...
	I1201 22:12:43.660957  700791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-663052
	I1201 22:12:43.689150  700791 main.go:143] libmachine: Using SSH client type: native
	I1201 22:12:43.689500  700791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1201 22:12:43.689508  700791 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 22:12:43.692075  700791 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1201 22:12:46.846955  700791 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-663052
	
	I1201 22:12:46.846971  700791 ubuntu.go:182] provisioning hostname "cert-expiration-663052"
	I1201 22:12:46.847034  700791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-663052
	I1201 22:12:46.869749  700791 main.go:143] libmachine: Using SSH client type: native
	I1201 22:12:46.870067  700791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1201 22:12:46.870075  700791 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-663052 && echo "cert-expiration-663052" | sudo tee /etc/hostname
	I1201 22:12:47.030196  700791 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-663052
	
	I1201 22:12:47.030280  700791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-663052
	I1201 22:12:47.048921  700791 main.go:143] libmachine: Using SSH client type: native
	I1201 22:12:47.049231  700791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1201 22:12:47.049246  700791 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-663052' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-663052/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-663052' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 22:12:47.203749  700791 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 22:12:47.203764  700791 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-482752/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-482752/.minikube}
	I1201 22:12:47.203782  700791 ubuntu.go:190] setting up certificates
	I1201 22:12:47.203792  700791 provision.go:84] configureAuth start
	I1201 22:12:47.203873  700791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-663052
	I1201 22:12:47.228999  700791 provision.go:143] copyHostCerts
	I1201 22:12:47.229060  700791 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem, removing ...
	I1201 22:12:47.229068  700791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 22:12:47.229147  700791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem (1082 bytes)
	I1201 22:12:47.229250  700791 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem, removing ...
	I1201 22:12:47.229259  700791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 22:12:47.229285  700791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem (1123 bytes)
	I1201 22:12:47.229368  700791 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem, removing ...
	I1201 22:12:47.229372  700791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 22:12:47.229398  700791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem (1675 bytes)
	I1201 22:12:47.229453  700791 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-663052 san=[127.0.0.1 192.168.85.2 cert-expiration-663052 localhost minikube]
	I1201 22:12:47.291596  700791 provision.go:177] copyRemoteCerts
	I1201 22:12:47.291656  700791 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 22:12:47.291704  700791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-663052
	I1201 22:12:47.309712  700791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/cert-expiration-663052/id_rsa Username:docker}
	I1201 22:12:47.415151  700791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 22:12:47.433398  700791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1201 22:12:47.452317  700791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 22:12:47.470369  700791 provision.go:87] duration metric: took 266.556111ms to configureAuth
	I1201 22:12:47.470388  700791 ubuntu.go:206] setting minikube options for container-runtime
	I1201 22:12:47.470580  700791 config.go:182] Loaded profile config "cert-expiration-663052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 22:12:47.470687  700791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-663052
	I1201 22:12:47.488134  700791 main.go:143] libmachine: Using SSH client type: native
	I1201 22:12:47.488437  700791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1201 22:12:47.488450  700791 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 22:12:47.790569  700791 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 22:12:47.790584  700791 machine.go:97] duration metric: took 4.129702903s to provisionDockerMachine
	I1201 22:12:47.790594  700791 client.go:176] duration metric: took 9.853962744s to LocalClient.Create
	I1201 22:12:47.790617  700791 start.go:167] duration metric: took 9.854027678s to libmachine.API.Create "cert-expiration-663052"
	I1201 22:12:47.790623  700791 start.go:293] postStartSetup for "cert-expiration-663052" (driver="docker")
	I1201 22:12:47.790633  700791 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 22:12:47.790697  700791 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 22:12:47.790735  700791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-663052
	I1201 22:12:47.809819  700791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/cert-expiration-663052/id_rsa Username:docker}
	I1201 22:12:47.919878  700791 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 22:12:47.924485  700791 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 22:12:47.924505  700791 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 22:12:47.924515  700791 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/addons for local assets ...
	I1201 22:12:47.924572  700791 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/files for local assets ...
	I1201 22:12:47.924653  700791 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> 4860022.pem in /etc/ssl/certs
	I1201 22:12:47.924775  700791 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 22:12:47.934130  700791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 22:12:47.954757  700791 start.go:296] duration metric: took 164.119969ms for postStartSetup
	I1201 22:12:47.955127  700791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-663052
	I1201 22:12:47.977683  700791 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/config.json ...
	I1201 22:12:47.977945  700791 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 22:12:47.977994  700791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-663052
	I1201 22:12:47.995394  700791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/cert-expiration-663052/id_rsa Username:docker}
	I1201 22:12:48.101023  700791 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 22:12:48.105834  700791 start.go:128] duration metric: took 10.17472099s to createHost
	I1201 22:12:48.105850  700791 start.go:83] releasing machines lock for "cert-expiration-663052", held for 10.174830247s
	I1201 22:12:48.105925  700791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-663052
	I1201 22:12:48.123881  700791 ssh_runner.go:195] Run: cat /version.json
	I1201 22:12:48.123936  700791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-663052
	I1201 22:12:48.123945  700791 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 22:12:48.124008  700791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-663052
	I1201 22:12:48.143532  700791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/cert-expiration-663052/id_rsa Username:docker}
	I1201 22:12:48.152798  700791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/cert-expiration-663052/id_rsa Username:docker}
	I1201 22:12:48.247371  700791 ssh_runner.go:195] Run: systemctl --version
	I1201 22:12:48.340721  700791 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 22:12:48.379738  700791 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 22:12:48.384440  700791 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 22:12:48.384508  700791 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 22:12:48.415316  700791 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1201 22:12:48.415340  700791 start.go:496] detecting cgroup driver to use...
	I1201 22:12:48.415388  700791 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1201 22:12:48.415457  700791 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 22:12:48.433678  700791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 22:12:48.446907  700791 docker.go:218] disabling cri-docker service (if available) ...
	I1201 22:12:48.446962  700791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 22:12:48.465303  700791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 22:12:48.484691  700791 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 22:12:48.605615  700791 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 22:12:48.762388  700791 docker.go:234] disabling docker service ...
	I1201 22:12:48.762481  700791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 22:12:48.784906  700791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 22:12:48.799016  700791 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 22:12:48.913714  700791 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 22:12:49.039066  700791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 22:12:49.053646  700791 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 22:12:49.068771  700791 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 22:12:49.068853  700791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:12:49.077849  700791 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 22:12:49.077909  700791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:12:49.087298  700791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:12:49.096114  700791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:12:49.104997  700791 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 22:12:49.113744  700791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:12:49.122576  700791 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:12:49.136129  700791 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:12:49.145465  700791 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 22:12:49.153284  700791 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 22:12:49.161073  700791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 22:12:49.275348  700791 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 22:12:49.443079  700791 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 22:12:49.443209  700791 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 22:12:49.448400  700791 start.go:564] Will wait 60s for crictl version
	I1201 22:12:49.448471  700791 ssh_runner.go:195] Run: which crictl
	I1201 22:12:49.453317  700791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 22:12:49.489307  700791 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 22:12:49.489394  700791 ssh_runner.go:195] Run: crio --version
	I1201 22:12:49.520085  700791 ssh_runner.go:195] Run: crio --version
	I1201 22:12:49.554316  700791 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1201 22:12:49.557254  700791 cli_runner.go:164] Run: docker network inspect cert-expiration-663052 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 22:12:49.572995  700791 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1201 22:12:49.576621  700791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 22:12:49.585908  700791 kubeadm.go:884] updating cluster {Name:cert-expiration-663052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-663052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 22:12:49.586022  700791 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 22:12:49.586072  700791 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 22:12:49.624228  700791 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 22:12:49.624239  700791 crio.go:433] Images already preloaded, skipping extraction
	I1201 22:12:49.624295  700791 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 22:12:49.650160  700791 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 22:12:49.650172  700791 cache_images.go:86] Images are preloaded, skipping loading
	I1201 22:12:49.650178  700791 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1201 22:12:49.650257  700791 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-663052 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-663052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 22:12:49.650331  700791 ssh_runner.go:195] Run: crio config
	I1201 22:12:49.712854  700791 cni.go:84] Creating CNI manager for ""
	I1201 22:12:49.712867  700791 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 22:12:49.712895  700791 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 22:12:49.712932  700791 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-663052 NodeName:cert-expiration-663052 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 22:12:49.713081  700791 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-663052"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
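The kubeadm config printed above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity check before handing such a file to `kubeadm init` is to count the documents by their `kind:` lines; this sketch uses a hypothetical `/tmp/kubeadm-demo.yaml` stand-in, not the real `/var/tmp/minikube/kubeadm.yaml`:

```shell
# Build a skeleton of the four-document stream from the log above.
cat > /tmp/kubeadm-demo.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# One `kind:` per document: expect 4.
grep -c '^kind:' /tmp/kubeadm-demo.yaml
```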
	I1201 22:12:49.713167  700791 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 22:12:49.723015  700791 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 22:12:49.723087  700791 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 22:12:49.731562  700791 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1201 22:12:49.745270  700791 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 22:12:49.758502  700791 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1201 22:12:49.771752  700791 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1201 22:12:49.775505  700791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
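The `/etc/hosts` one-liner above is an idempotent update: strip any stale line for the hostname, then append the fresh mapping, then copy the result back. A standalone sketch of the same pattern against a demo file (`/tmp/hosts-demo`, not the real `/etc/hosts`):

```shell
# Seed a hosts file that already has a stale entry for the control plane.
HOSTS=/tmp/hosts-demo
printf '127.0.0.1\tlocalhost\n192.168.85.1\tcontrol-plane.minikube.internal\n' > "$HOSTS"
# Drop the stale entry, append the current IP, replace atomically.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$HOSTS"; \
  printf '192.168.85.2\tcontrol-plane.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
# Exactly one entry remains, with the new address.
grep 'control-plane.minikube.internal' "$HOSTS"
```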
	I1201 22:12:49.786087  700791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 22:12:49.898620  700791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 22:12:49.915053  700791 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052 for IP: 192.168.85.2
	I1201 22:12:49.915077  700791 certs.go:195] generating shared ca certs ...
	I1201 22:12:49.915093  700791 certs.go:227] acquiring lock for ca certs: {Name:mk0475ccdbd6f854bab22fd8dfb32cc1af021336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 22:12:49.915383  700791 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key
	I1201 22:12:49.915431  700791 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key
	I1201 22:12:49.915438  700791 certs.go:257] generating profile certs ...
	I1201 22:12:49.915505  700791 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/client.key
	I1201 22:12:49.915517  700791 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/client.crt with IP's: []
	I1201 22:12:50.411168  700791 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/client.crt ...
	I1201 22:12:50.411184  700791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/client.crt: {Name:mk0d06976617ce9e4792f142e98afc4ea5eaf845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 22:12:50.411395  700791 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/client.key ...
	I1201 22:12:50.411403  700791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/client.key: {Name:mkd242b08f5e85e1aefdd478e9b7b0ce629e625f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 22:12:50.411505  700791 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/apiserver.key.dc5745a9
	I1201 22:12:50.411519  700791 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/apiserver.crt.dc5745a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1201 22:12:50.667217  700791 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/apiserver.crt.dc5745a9 ...
	I1201 22:12:50.667233  700791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/apiserver.crt.dc5745a9: {Name:mkd1b964290f24b21ebb8d510e4c617bc6086edf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 22:12:50.667415  700791 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/apiserver.key.dc5745a9 ...
	I1201 22:12:50.667423  700791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/apiserver.key.dc5745a9: {Name:mkddb29af206a4ee3da537b5acfd1a20eb0c00fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 22:12:50.667489  700791 certs.go:382] copying /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/apiserver.crt.dc5745a9 -> /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/apiserver.crt
	I1201 22:12:50.667561  700791 certs.go:386] copying /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/apiserver.key.dc5745a9 -> /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/apiserver.key
	I1201 22:12:50.667611  700791 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/proxy-client.key
	I1201 22:12:50.667622  700791 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/proxy-client.crt with IP's: []
	I1201 22:12:51.087519  700791 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/proxy-client.crt ...
	I1201 22:12:51.087539  700791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/proxy-client.crt: {Name:mkf98fbdf016185ee6ad7921c7a5a8e917950eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 22:12:51.087741  700791 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/proxy-client.key ...
	I1201 22:12:51.087749  700791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/proxy-client.key: {Name:mkafdd4231b34fc0f1e6df05175fa5f7f5eff64d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 22:12:51.087956  700791 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem (1338 bytes)
	W1201 22:12:51.088001  700791 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002_empty.pem, impossibly tiny 0 bytes
	I1201 22:12:51.088010  700791 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 22:12:51.088036  700791 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem (1082 bytes)
	I1201 22:12:51.088060  700791 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem (1123 bytes)
	I1201 22:12:51.088084  700791 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem (1675 bytes)
	I1201 22:12:51.088131  700791 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 22:12:51.088764  700791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 22:12:51.107903  700791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 22:12:51.126322  700791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 22:12:51.146238  700791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1201 22:12:51.165936  700791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1201 22:12:51.185062  700791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1201 22:12:51.204509  700791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 22:12:51.222950  700791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/cert-expiration-663052/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 22:12:51.241257  700791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem --> /usr/share/ca-certificates/486002.pem (1338 bytes)
	I1201 22:12:51.260386  700791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /usr/share/ca-certificates/4860022.pem (1708 bytes)
	I1201 22:12:51.278562  700791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 22:12:51.296291  700791 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 22:12:51.309142  700791 ssh_runner.go:195] Run: openssl version
	I1201 22:12:51.315281  700791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4860022.pem && ln -fs /usr/share/ca-certificates/4860022.pem /etc/ssl/certs/4860022.pem"
	I1201 22:12:51.323772  700791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4860022.pem
	I1201 22:12:51.327473  700791 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 22:12:51.327531  700791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4860022.pem
	I1201 22:12:51.369048  700791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4860022.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 22:12:51.378100  700791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 22:12:51.386791  700791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 22:12:51.390579  700791 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 22:12:51.390644  700791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 22:12:51.432036  700791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 22:12:51.440963  700791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/486002.pem && ln -fs /usr/share/ca-certificates/486002.pem /etc/ssl/certs/486002.pem"
	I1201 22:12:51.449788  700791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/486002.pem
	I1201 22:12:51.454224  700791 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 22:12:51.454288  700791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/486002.pem
	I1201 22:12:51.496196  700791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/486002.pem /etc/ssl/certs/51391683.0"
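The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's subject-hash lookup scheme: the library finds CA certificates in `/etc/ssl/certs` by a `<subject-hash>.0` symlink rather than by file name. A minimal reproduction with a hypothetical self-signed cert under `/tmp`:

```shell
# Generate a throwaway self-signed cert (illustrative only).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.pem -subj "/CN=demo" -days 1 2>/dev/null
# Compute the subject-name hash and create the <hash>.0 symlink,
# mirroring the "ln -fs ... /etc/ssl/certs/<hash>.0" lines in the log.
HASH=$(openssl x509 -hash -noout -in /tmp/demo.pem)
ln -fs /tmp/demo.pem "/tmp/${HASH}.0"
# The cert is now reachable via its hash link.
openssl x509 -noout -subject -in "/tmp/${HASH}.0"
```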
	I1201 22:12:51.504739  700791 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 22:12:51.508619  700791 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
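The failed `stat` above is how first start is detected: a non-zero exit on the not-yet-generated client cert is read as "likely first start" rather than as an error. The same probe in isolation, with a demo path:

```shell
# First-start probe: stat a cert that does not exist yet (demo path,
# standing in for /var/lib/minikube/certs/apiserver-kubelet-client.crt).
CERT=/tmp/missing-demo.crt
rm -f "$CERT"
if ! stat "$CERT" >/dev/null 2>&1; then
  echo "first start"
fi
```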
	I1201 22:12:51.508679  700791 kubeadm.go:401] StartCluster: {Name:cert-expiration-663052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-663052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 22:12:51.508745  700791 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 22:12:51.508805  700791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 22:12:51.539207  700791 cri.go:89] found id: ""
	I1201 22:12:51.539298  700791 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 22:12:51.547655  700791 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 22:12:51.555692  700791 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 22:12:51.555751  700791 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 22:12:51.563613  700791 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 22:12:51.563623  700791 kubeadm.go:158] found existing configuration files:
	
	I1201 22:12:51.563676  700791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1201 22:12:51.571788  700791 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 22:12:51.571857  700791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 22:12:51.579378  700791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1201 22:12:51.587977  700791 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 22:12:51.588035  700791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 22:12:51.596102  700791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1201 22:12:51.604152  700791 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 22:12:51.604212  700791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 22:12:51.611840  700791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1201 22:12:51.619712  700791 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 22:12:51.619793  700791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
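The four grep-then-rm pairs above are one loop unrolled: each kubeconfig under `/etc/kubernetes` is kept only if it already points at the expected control-plane endpoint, and removed otherwise (here a missing file also fails the grep, so the `rm -f` is a no-op). A compact sketch of the same cleanup against demo files:

```shell
# Stale-config cleanup: keep configs pointing at ENDPOINT, drop the rest.
ENDPOINT='https://control-plane.minikube.internal:8443'
mkdir -p /tmp/kube-demo
echo "server: $ENDPOINT" > /tmp/kube-demo/admin.conf          # current
echo "server: https://other-host:6443" > /tmp/kube-demo/kubelet.conf  # stale
for f in /tmp/kube-demo/*.conf; do
  grep -q "$ENDPOINT" "$f" || rm -f "$f"
done
ls /tmp/kube-demo
```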
	I1201 22:12:51.627489  700791 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 22:12:51.687818  700791 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1201 22:12:51.688123  700791 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 22:12:51.725391  700791 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 22:12:51.725697  700791 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1201 22:12:51.725750  700791 kubeadm.go:319] OS: Linux
	I1201 22:12:51.725820  700791 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 22:12:51.725882  700791 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1201 22:12:51.725948  700791 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 22:12:51.725995  700791 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 22:12:51.726058  700791 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 22:12:51.726129  700791 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 22:12:51.726188  700791 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 22:12:51.726256  700791 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 22:12:51.726319  700791 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1201 22:12:51.790827  700791 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 22:12:51.790939  700791 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 22:12:51.791027  700791 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 22:12:51.799884  700791 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 22:12:51.806235  700791 out.go:252]   - Generating certificates and keys ...
	I1201 22:12:51.806351  700791 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 22:12:51.806430  700791 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 22:12:52.116907  700791 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1201 22:12:52.662809  700791 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1201 22:12:52.875693  700791 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1201 22:12:53.829435  700791 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1201 22:12:54.042020  700791 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1201 22:12:54.042287  700791 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-663052 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1201 22:12:54.885911  700791 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1201 22:12:54.886197  700791 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-663052 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1201 22:12:57.140717  700791 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1201 22:12:57.774859  700791 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1201 22:12:58.933156  700791 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1201 22:12:58.933393  700791 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 22:13:00.030164  700791 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 22:13:00.982063  700791 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 22:13:01.330444  700791 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 22:13:01.470296  700791 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 22:13:02.206029  700791 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 22:13:02.207408  700791 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 22:13:02.210775  700791 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 22:13:02.216667  700791 out.go:252]   - Booting up control plane ...
	I1201 22:13:02.216776  700791 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 22:13:02.216853  700791 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 22:13:02.216920  700791 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 22:13:02.235418  700791 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 22:13:02.235547  700791 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 22:13:02.243872  700791 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 22:13:02.244284  700791 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 22:13:02.244509  700791 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 22:13:02.385581  700791 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 22:13:02.385693  700791 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 22:13:02.886454  700791 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.952341ms
	I1201 22:13:02.890275  700791 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1201 22:13:02.890365  700791 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1201 22:13:02.890453  700791 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1201 22:13:02.890530  700791 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1201 22:13:05.082298  700791 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.191543211s
	I1201 22:13:08.532930  700791 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.642653303s
	I1201 22:13:09.892913  700791 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002557468s
	I1201 22:13:09.932507  700791 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1201 22:13:09.950632  700791 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1201 22:13:09.965468  700791 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1201 22:13:09.965665  700791 kubeadm.go:319] [mark-control-plane] Marking the node cert-expiration-663052 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1201 22:13:09.979236  700791 kubeadm.go:319] [bootstrap-token] Using token: sq9nnw.tqrtir3bnzmsvmyw
	I1201 22:13:09.984534  700791 out.go:252]   - Configuring RBAC rules ...
	I1201 22:13:09.984698  700791 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1201 22:13:09.990410  700791 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1201 22:13:09.999291  700791 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1201 22:13:10.020224  700791 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1201 22:13:10.028754  700791 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1201 22:13:10.034012  700791 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1201 22:13:10.300187  700791 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1201 22:13:10.741191  700791 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1201 22:13:11.300808  700791 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1201 22:13:11.302008  700791 kubeadm.go:319] 
	I1201 22:13:11.302076  700791 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1201 22:13:11.302080  700791 kubeadm.go:319] 
	I1201 22:13:11.302157  700791 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1201 22:13:11.302161  700791 kubeadm.go:319] 
	I1201 22:13:11.302185  700791 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1201 22:13:11.302243  700791 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1201 22:13:11.302292  700791 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1201 22:13:11.302295  700791 kubeadm.go:319] 
	I1201 22:13:11.302348  700791 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1201 22:13:11.302351  700791 kubeadm.go:319] 
	I1201 22:13:11.302397  700791 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1201 22:13:11.302400  700791 kubeadm.go:319] 
	I1201 22:13:11.302454  700791 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1201 22:13:11.302528  700791 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1201 22:13:11.302596  700791 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1201 22:13:11.302599  700791 kubeadm.go:319] 
	I1201 22:13:11.302683  700791 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1201 22:13:11.302759  700791 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1201 22:13:11.302762  700791 kubeadm.go:319] 
	I1201 22:13:11.302845  700791 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token sq9nnw.tqrtir3bnzmsvmyw \
	I1201 22:13:11.302968  700791 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ba416c8b9f9df321471bca98b9f543ca561a2f4cf5ae7c15e9cc221036e7ebbc \
	I1201 22:13:11.302988  700791 kubeadm.go:319] 	--control-plane 
	I1201 22:13:11.302990  700791 kubeadm.go:319] 
	I1201 22:13:11.303074  700791 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1201 22:13:11.303077  700791 kubeadm.go:319] 
	I1201 22:13:11.303194  700791 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token sq9nnw.tqrtir3bnzmsvmyw \
	I1201 22:13:11.303313  700791 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ba416c8b9f9df321471bca98b9f543ca561a2f4cf5ae7c15e9cc221036e7ebbc 
	I1201 22:13:11.307749  700791 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1201 22:13:11.307971  700791 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1201 22:13:11.308074  700791 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 22:13:11.308090  700791 cni.go:84] Creating CNI manager for ""
	I1201 22:13:11.308096  700791 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 22:13:11.311320  700791 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1201 22:13:11.314348  700791 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1201 22:13:11.319324  700791 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1201 22:13:11.319335  700791 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1201 22:13:11.333398  700791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1201 22:13:11.631824  700791 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1201 22:13:11.631944  700791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 22:13:11.632032  700791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-663052 minikube.k8s.io/updated_at=2025_12_01T22_13_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9 minikube.k8s.io/name=cert-expiration-663052 minikube.k8s.io/primary=true
	I1201 22:13:11.825173  700791 ops.go:34] apiserver oom_adj: -16
	I1201 22:13:11.825196  700791 kubeadm.go:1114] duration metric: took 193.303992ms to wait for elevateKubeSystemPrivileges
	I1201 22:13:11.825210  700791 kubeadm.go:403] duration metric: took 20.316539636s to StartCluster
	I1201 22:13:11.825229  700791 settings.go:142] acquiring lock: {Name:mk783c1fd28fb527bb837882511f132133dc86fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 22:13:11.825290  700791 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 22:13:11.826177  700791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/kubeconfig: {Name:mk92cfd0553ba70a7f11610c1bc1b8b04b905ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 22:13:11.826387  700791 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 22:13:11.826472  700791 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1201 22:13:11.826721  700791 config.go:182] Loaded profile config "cert-expiration-663052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 22:13:11.826787  700791 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 22:13:11.826844  700791 addons.go:70] Setting storage-provisioner=true in profile "cert-expiration-663052"
	I1201 22:13:11.826857  700791 addons.go:239] Setting addon storage-provisioner=true in "cert-expiration-663052"
	I1201 22:13:11.826877  700791 host.go:66] Checking if "cert-expiration-663052" exists ...
	I1201 22:13:11.827720  700791 cli_runner.go:164] Run: docker container inspect cert-expiration-663052 --format={{.State.Status}}
	I1201 22:13:11.827990  700791 addons.go:70] Setting default-storageclass=true in profile "cert-expiration-663052"
	I1201 22:13:11.828007  700791 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-663052"
	I1201 22:13:11.828307  700791 cli_runner.go:164] Run: docker container inspect cert-expiration-663052 --format={{.State.Status}}
	I1201 22:13:11.830602  700791 out.go:179] * Verifying Kubernetes components...
	I1201 22:13:11.833706  700791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 22:13:11.876027  700791 addons.go:239] Setting addon default-storageclass=true in "cert-expiration-663052"
	I1201 22:13:11.876056  700791 host.go:66] Checking if "cert-expiration-663052" exists ...
	I1201 22:13:11.876496  700791 cli_runner.go:164] Run: docker container inspect cert-expiration-663052 --format={{.State.Status}}
	I1201 22:13:11.879806  700791 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 22:13:11.882680  700791 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 22:13:11.882693  700791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 22:13:11.882787  700791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-663052
	I1201 22:13:11.914781  700791 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 22:13:11.914794  700791 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 22:13:11.914859  700791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-663052
	I1201 22:13:11.936316  700791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/cert-expiration-663052/id_rsa Username:docker}
	I1201 22:13:11.960490  700791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/cert-expiration-663052/id_rsa Username:docker}
	I1201 22:13:12.197966  700791 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
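The long pipeline above fetches the CoreDNS ConfigMap, uses `sed` to insert a `hosts` block (mapping `host.minikube.internal` to the gateway IP) before the `forward` plugin and a `log` directive before `errors`, then replaces the ConfigMap. A sketch of just the `sed` transformation against a local sample Corefile (the sample file and its path are illustrative; the real command pipes through `kubectl get`/`kubectl replace`, and the `\n` escapes in the insert text rely on GNU sed):

```shell
# Build a minimal sample Corefile matching the indentation the sed patterns expect.
cat > /tmp/Corefile.sample <<'EOF'
        errors
        health
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
EOF
# Insert the hosts block before "forward" and "log" before "errors",
# exactly as the minikube command above does.
sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' \
    -e '/^        errors *$/i \        log' /tmp/Corefile.sample
```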
	I1201 22:13:12.210834  700791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 22:13:12.248925  700791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 22:13:12.300561  700791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 22:13:12.627604  700791 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1201 22:13:12.629628  700791 api_server.go:52] waiting for apiserver process to appear ...
	I1201 22:13:12.629688  700791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:13:12.845558  700791 api_server.go:72] duration metric: took 1.019145986s to wait for apiserver process to appear ...
	I1201 22:13:12.845569  700791 api_server.go:88] waiting for apiserver healthz status ...
	I1201 22:13:12.845584  700791 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1201 22:13:12.858816  700791 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1201 22:13:12.860110  700791 api_server.go:141] control plane version: v1.34.2
	I1201 22:13:12.860136  700791 api_server.go:131] duration metric: took 14.561941ms to wait for apiserver health ...
	I1201 22:13:12.860144  700791 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 22:13:12.866901  700791 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1201 22:13:12.868003  700791 system_pods.go:59] 5 kube-system pods found
	I1201 22:13:12.868028  700791 system_pods.go:61] "etcd-cert-expiration-663052" [d26109dd-32d3-4f4c-b4c5-f6db1cd8215d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 22:13:12.868036  700791 system_pods.go:61] "kube-apiserver-cert-expiration-663052" [29243b59-d22a-47ad-b90a-a0162dd131fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 22:13:12.868042  700791 system_pods.go:61] "kube-controller-manager-cert-expiration-663052" [599afc72-eafe-4138-83dd-b5bef886950c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 22:13:12.868077  700791 system_pods.go:61] "kube-scheduler-cert-expiration-663052" [e31d7a9f-1b9c-4657-8c30-09ec39b0e0e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 22:13:12.868081  700791 system_pods.go:61] "storage-provisioner" [50f0af89-241c-4b20-b93f-a1e8a7b283e7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1201 22:13:12.868086  700791 system_pods.go:74] duration metric: took 7.937543ms to wait for pod list to return data ...
	I1201 22:13:12.868096  700791 kubeadm.go:587] duration metric: took 1.041689055s to wait for: map[apiserver:true system_pods:true]
	I1201 22:13:12.868108  700791 node_conditions.go:102] verifying NodePressure condition ...
	I1201 22:13:12.869749  700791 addons.go:530] duration metric: took 1.042952059s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1201 22:13:12.870852  700791 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1201 22:13:12.870872  700791 node_conditions.go:123] node cpu capacity is 2
	I1201 22:13:12.870893  700791 node_conditions.go:105] duration metric: took 2.78147ms to run NodePressure ...
	I1201 22:13:12.870904  700791 start.go:242] waiting for startup goroutines ...
	I1201 22:13:13.131763  700791 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-663052" context rescaled to 1 replicas
	I1201 22:13:13.131789  700791 start.go:247] waiting for cluster config update ...
	I1201 22:13:13.131800  700791 start.go:256] writing updated cluster config ...
	I1201 22:13:13.132101  700791 ssh_runner.go:195] Run: rm -f paused
	I1201 22:13:13.194233  700791 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1201 22:13:13.197382  700791 out.go:179] * Done! kubectl is now configured to use "cert-expiration-663052" cluster and "default" namespace by default
	I1201 22:15:12.719724  661844 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000974446s
	I1201 22:15:12.719759  661844 kubeadm.go:319] 
	I1201 22:15:12.719817  661844 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1201 22:15:12.719850  661844 kubeadm.go:319] 	- The kubelet is not running
	I1201 22:15:12.719955  661844 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1201 22:15:12.719961  661844 kubeadm.go:319] 
	I1201 22:15:12.720065  661844 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1201 22:15:12.720097  661844 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1201 22:15:12.720128  661844 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1201 22:15:12.720132  661844 kubeadm.go:319] 
	I1201 22:15:12.724042  661844 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1201 22:15:12.724470  661844 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1201 22:15:12.724584  661844 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 22:15:12.724824  661844 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1201 22:15:12.724835  661844 kubeadm.go:319] 
	I1201 22:15:12.724906  661844 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1201 22:15:12.724973  661844 kubeadm.go:403] duration metric: took 12m7.587019156s to StartCluster
	I1201 22:15:12.725016  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1201 22:15:12.725092  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1201 22:15:12.751306  661844 cri.go:89] found id: ""
	I1201 22:15:12.751332  661844 logs.go:282] 0 containers: []
	W1201 22:15:12.751341  661844 logs.go:284] No container was found matching "kube-apiserver"
	I1201 22:15:12.751348  661844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1201 22:15:12.751434  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1201 22:15:12.785423  661844 cri.go:89] found id: ""
	I1201 22:15:12.785448  661844 logs.go:282] 0 containers: []
	W1201 22:15:12.785458  661844 logs.go:284] No container was found matching "etcd"
	I1201 22:15:12.785464  661844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1201 22:15:12.785523  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1201 22:15:12.813047  661844 cri.go:89] found id: ""
	I1201 22:15:12.813078  661844 logs.go:282] 0 containers: []
	W1201 22:15:12.813087  661844 logs.go:284] No container was found matching "coredns"
	I1201 22:15:12.813093  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1201 22:15:12.813155  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1201 22:15:12.842568  661844 cri.go:89] found id: ""
	I1201 22:15:12.842651  661844 logs.go:282] 0 containers: []
	W1201 22:15:12.842679  661844 logs.go:284] No container was found matching "kube-scheduler"
	I1201 22:15:12.842716  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1201 22:15:12.842805  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1201 22:15:12.869087  661844 cri.go:89] found id: ""
	I1201 22:15:12.869111  661844 logs.go:282] 0 containers: []
	W1201 22:15:12.869121  661844 logs.go:284] No container was found matching "kube-proxy"
	I1201 22:15:12.869127  661844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1201 22:15:12.869189  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1201 22:15:12.896165  661844 cri.go:89] found id: ""
	I1201 22:15:12.896188  661844 logs.go:282] 0 containers: []
	W1201 22:15:12.896197  661844 logs.go:284] No container was found matching "kube-controller-manager"
	I1201 22:15:12.896204  661844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1201 22:15:12.896265  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1201 22:15:12.924179  661844 cri.go:89] found id: ""
	I1201 22:15:12.924206  661844 logs.go:282] 0 containers: []
	W1201 22:15:12.924216  661844 logs.go:284] No container was found matching "kindnet"
	I1201 22:15:12.924222  661844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1201 22:15:12.924286  661844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1201 22:15:12.950180  661844 cri.go:89] found id: ""
	I1201 22:15:12.950207  661844 logs.go:282] 0 containers: []
	W1201 22:15:12.950216  661844 logs.go:284] No container was found matching "storage-provisioner"
	I1201 22:15:12.950225  661844 logs.go:123] Gathering logs for kubelet ...
	I1201 22:15:12.950245  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1201 22:15:13.019062  661844 logs.go:123] Gathering logs for dmesg ...
	I1201 22:15:13.019107  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1201 22:15:13.041962  661844 logs.go:123] Gathering logs for describe nodes ...
	I1201 22:15:13.042018  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1201 22:15:13.114819  661844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1201 22:15:13.114842  661844 logs.go:123] Gathering logs for CRI-O ...
	I1201 22:15:13.114854  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1201 22:15:13.160628  661844 logs.go:123] Gathering logs for container status ...
	I1201 22:15:13.160674  661844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1201 22:15:13.195652  661844 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000974446s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1201 22:15:13.195715  661844 out.go:285] * 
	W1201 22:15:13.195881  661844 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1201 22:15:13.195902  661844 out.go:285] * 
	W1201 22:15:13.198169  661844 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 22:15:13.203333  661844 out.go:203] 
	W1201 22:15:13.206184  661844 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1201 22:15:13.206243  661844 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1201 22:15:13.206264  661844 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1201 22:15:13.209359  661844 out.go:203] 
	
	
	==> CRI-O <==
	Dec 01 22:02:46 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:02:46.272920094Z" level=info msg="Neither image nor artfiact registry.k8s.io/etcd:3.6.5-0 found" id=46d8b826-dbaf-4154-b104-a1369f2c2310 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:02:46 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:02:46.29767149Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=7bb2af2b-d9ce-4d44-bc1e-f8937b323717 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:02:46 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:02:46.297817785Z" level=info msg="Image registry.k8s.io/kube-scheduler:v1.35.0-beta.0 not found" id=7bb2af2b-d9ce-4d44-bc1e-f8937b323717 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:02:46 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:02:46.297855216Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-scheduler:v1.35.0-beta.0 found" id=7bb2af2b-d9ce-4d44-bc1e-f8937b323717 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:02:46 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:02:46.302242758Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=749bdb32-1527-4b43-93b0-0186686ecf05 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:02:46 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:02:46.302418557Z" level=info msg="Image registry.k8s.io/kube-apiserver:v1.35.0-beta.0 not found" id=749bdb32-1527-4b43-93b0-0186686ecf05 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:02:46 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:02:46.302465802Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-apiserver:v1.35.0-beta.0 found" id=749bdb32-1527-4b43-93b0-0186686ecf05 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:02:46 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:02:46.306302882Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=2d245012-5c62-423a-95a9-011cedf28238 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:02:46 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:02:46.30644838Z" level=info msg="Image registry.k8s.io/coredns/coredns:v1.13.1 not found" id=2d245012-5c62-423a-95a9-011cedf28238 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:02:46 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:02:46.306495592Z" level=info msg="Neither image nor artfiact registry.k8s.io/coredns/coredns:v1.13.1 found" id=2d245012-5c62-423a-95a9-011cedf28238 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:02:47 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:02:47.325001794Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=371ba980-9d47-4ee9-ae0b-57ba73fea967 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:07:08 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:07:08.847599499Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=daa5149c-5375-4c18-86b4-61a154750100 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:07:08 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:07:08.852815338Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=cb2c5596-cdf2-4baf-a7f2-55c11e7e152c name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:07:08 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:07:08.854367682Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=883c3a44-473e-48f1-ac61-a69859a35f0b name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:07:08 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:07:08.85730599Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=8c1ce264-57f2-428a-b34f-6bffcd66ff70 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:07:08 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:07:08.85829152Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=153e27bb-3392-442b-8d21-99176f1b6c09 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:07:08 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:07:08.859850855Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=115a7554-86c6-4f3d-9707-3a8f204d4548 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:07:08 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:07:08.862026882Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=28751469-7b96-4f62-aa05-c79f13446817 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:11:11 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:11:11.144501529Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=72cb4184-a604-432d-a4a4-1c94e217da3c name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:11:11 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:11:11.146393894Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=3f3d43cb-2484-4116-8882-205dbc359760 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:11:11 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:11:11.149568267Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=612eabfa-f4ea-4596-ae77-a99ea2def944 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:11:11 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:11:11.15188322Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=fce5804d-89f5-413a-80c5-b928651ff519 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:11:11 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:11:11.153326971Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=e38e0aaf-1253-4aa0-b202-59f5ad5de5d1 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:11:11 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:11:11.15620837Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9dc22e95-3e0a-4f0c-ab9d-211c5b3b2f9f name=/runtime.v1.ImageService/ImageStatus
	Dec 01 22:11:11 kubernetes-upgrade-738753 crio[615]: time="2025-12-01T22:11:11.158834135Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=d731c831-4f40-464b-b45c-795cfde07ed9 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +3.421799] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:41] overlayfs: idmapped layers are currently not supported
	[ +28.971373] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:43] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:48] overlayfs: idmapped layers are currently not supported
	[ +29.317685] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:50] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:51] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:52] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:53] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:54] overlayfs: idmapped layers are currently not supported
	[  +2.710821] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:55] overlayfs: idmapped layers are currently not supported
	[ +23.922036] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:56] overlayfs: idmapped layers are currently not supported
	[ +26.428517] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:58] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:59] overlayfs: idmapped layers are currently not supported
	[Dec 1 22:01] overlayfs: idmapped layers are currently not supported
	[Dec 1 22:02] overlayfs: idmapped layers are currently not supported
	[ +24.384212] overlayfs: idmapped layers are currently not supported
	[Dec 1 22:09] overlayfs: idmapped layers are currently not supported
	[Dec 1 22:11] overlayfs: idmapped layers are currently not supported
	[Dec 1 22:12] overlayfs: idmapped layers are currently not supported
	[Dec 1 22:13] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 22:15:14 up  3:57,  0 user,  load average: 0.71, 1.59, 1.85
	Linux kubernetes-upgrade-738753 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 01 22:15:12 kubernetes-upgrade-738753 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 22:15:13 kubernetes-upgrade-738753 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Dec 01 22:15:13 kubernetes-upgrade-738753 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 22:15:13 kubernetes-upgrade-738753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 22:15:13 kubernetes-upgrade-738753 kubelet[12888]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 22:15:13 kubernetes-upgrade-738753 kubelet[12888]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 22:15:13 kubernetes-upgrade-738753 kubelet[12888]: E1201 22:15:13.251651   12888 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 22:15:13 kubernetes-upgrade-738753 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 22:15:13 kubernetes-upgrade-738753 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 22:15:13 kubernetes-upgrade-738753 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 01 22:15:13 kubernetes-upgrade-738753 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 22:15:13 kubernetes-upgrade-738753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 22:15:13 kubernetes-upgrade-738753 kubelet[12906]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 22:15:13 kubernetes-upgrade-738753 kubelet[12906]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 22:15:13 kubernetes-upgrade-738753 kubelet[12906]: E1201 22:15:13.973583   12906 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 22:15:13 kubernetes-upgrade-738753 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 22:15:13 kubernetes-upgrade-738753 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 01 22:15:14 kubernetes-upgrade-738753 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 01 22:15:14 kubernetes-upgrade-738753 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 22:15:14 kubernetes-upgrade-738753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 22:15:14 kubernetes-upgrade-738753 kubelet[12994]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 22:15:14 kubernetes-upgrade-738753 kubelet[12994]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 01 22:15:14 kubernetes-upgrade-738753 kubelet[12994]: E1201 22:15:14.720850   12994 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 01 22:15:14 kubernetes-upgrade-738753 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 01 22:15:14 kubernetes-upgrade-738753 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-738753 -n kubernetes-upgrade-738753
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-738753 -n kubernetes-upgrade-738753: exit status 2 (366.524847ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-738753" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-738753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-738753
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-738753: (2.2745673s)
--- FAIL: TestKubernetesUpgrade (799.10s)
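The kubelet journal above shows kubelet v1.35.0-beta.0 refusing to start on a cgroup v1 host ("cgroup v1 support is unsupported"), which is why the control plane never comes up. As a hedged sketch, not part of the test harness, a host can be checked for its cgroup hierarchy version before attempting such an upgrade; on cgroup v2 (unified) hosts, `/sys/fs/cgroup` is a `cgroup2fs` mount:

```shell
#!/usr/bin/env sh
# Hedged sketch: report which cgroup hierarchy version the host runs.
# Per the kubeadm warning above, kubelet v1.35+ rejects cgroup v1 unless
# the kubelet config option FailCgroupV1 is explicitly set to false.
cgroup_version() {
  # GNU stat -f prints the filesystem type; cgroup2fs means unified v2.
  if [ "$(stat -fc %T /sys/fs/cgroup 2>/dev/null)" = "cgroup2fs" ]; then
    echo "cgroup v2"
  else
    echo "cgroup v1 (or hybrid)"
  fi
}

cgroup_version
```

On the AWS 5.15.0-1084-aws kernel above this would report v1 (or hybrid), matching the failure.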

TestPause/serial/Pause (7.61s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-188533 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-188533 --alsologtostderr -v=5: exit status 80 (2.300079298s)

-- stdout --
	* Pausing node pause-188533 ... 
	
	

-- /stdout --
** stderr ** 
	I1201 22:11:07.144255  692767 out.go:360] Setting OutFile to fd 1 ...
	I1201 22:11:07.145213  692767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 22:11:07.145227  692767 out.go:374] Setting ErrFile to fd 2...
	I1201 22:11:07.145233  692767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 22:11:07.145498  692767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 22:11:07.145763  692767 out.go:368] Setting JSON to false
	I1201 22:11:07.145790  692767 mustload.go:66] Loading cluster: pause-188533
	I1201 22:11:07.146215  692767 config.go:182] Loaded profile config "pause-188533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 22:11:07.146682  692767 cli_runner.go:164] Run: docker container inspect pause-188533 --format={{.State.Status}}
	I1201 22:11:07.164964  692767 host.go:66] Checking if "pause-188533" exists ...
	I1201 22:11:07.165297  692767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 22:11:07.235840  692767 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-01 22:11:07.225295787 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 22:11:07.236572  692767 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764600683-21997/minikube-v1.37.0-1764600683-21997-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764600683-21997-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-188533 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1201 22:11:07.239482  692767 out.go:179] * Pausing node pause-188533 ... 
	I1201 22:11:07.243336  692767 host.go:66] Checking if "pause-188533" exists ...
	I1201 22:11:07.243726  692767 ssh_runner.go:195] Run: systemctl --version
	I1201 22:11:07.243792  692767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188533
	I1201 22:11:07.263061  692767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/pause-188533/id_rsa Username:docker}
	I1201 22:11:07.370150  692767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 22:11:07.383249  692767 pause.go:52] kubelet running: true
	I1201 22:11:07.383330  692767 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 22:11:07.654959  692767 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 22:11:07.655054  692767 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 22:11:07.732776  692767 cri.go:89] found id: "34ef219e4fc6b788dede490a784e4cdb33ebd117e7eae0c1ce37fbcbdfae616b"
	I1201 22:11:07.732799  692767 cri.go:89] found id: "dd1313324144863644a9ae08f65ff0230b115b579027738e7ca471d556ff8723"
	I1201 22:11:07.732804  692767 cri.go:89] found id: "23eb3c5f241ec3c013861dcfd53965dc750a523680455562b21e81f35d69d660"
	I1201 22:11:07.732808  692767 cri.go:89] found id: "dd06372589408364b8b58065de97cbe55de7a24dfcbd37bfc9b061320d5e4539"
	I1201 22:11:07.732812  692767 cri.go:89] found id: "bebec9756e9fcbf5790b7011256eaa67c0c522c378104d3e1ef3cdef4e894373"
	I1201 22:11:07.732815  692767 cri.go:89] found id: "10a70a6f5dbc1cf1cdce344789e47ea80729baa80aa552017e27ca47b2227324"
	I1201 22:11:07.732818  692767 cri.go:89] found id: "125f9c98221f08aefc5c1d5767531793e6623aafdbbe2788da3f53d0b37c5b5f"
	I1201 22:11:07.732822  692767 cri.go:89] found id: "81d2cfcf9894f9d0a557d8c60957ff9e303266d6d73df814a3f9fa53142a11f0"
	I1201 22:11:07.732825  692767 cri.go:89] found id: "f517468165e6149e74b6caf291fa53acbeda3290845551238b0ba8999a831f3a"
	I1201 22:11:07.732831  692767 cri.go:89] found id: "55f3da13e226d8e308b654b59a220e4446138e3b1a7e31583e355636f80f4a1e"
	I1201 22:11:07.732834  692767 cri.go:89] found id: "ee45ee63674dd2a1ab4e2fc4f3a39bf0a55bbc05e2ffd6adef12eb238ba3b810"
	I1201 22:11:07.732837  692767 cri.go:89] found id: "829d829e6c58b92ed7c38ab13e650eb88fd6a1b9d086634b309b04cd79ffb2eb"
	I1201 22:11:07.732840  692767 cri.go:89] found id: "9d0eee6eaad1b35e670616323ad9a68b2e29bce0607e45617707c0e8878e234e"
	I1201 22:11:07.732843  692767 cri.go:89] found id: "7f474f99bf26bbeb5fd1570a1b4d07a037ededb0976f97b7cfaf0397b2fad3c9"
	I1201 22:11:07.732846  692767 cri.go:89] found id: ""
	I1201 22:11:07.732896  692767 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 22:11:07.744786  692767 retry.go:31] will retry after 139.554894ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T22:11:07Z" level=error msg="open /run/runc: no such file or directory"
	I1201 22:11:07.885315  692767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 22:11:07.899384  692767 pause.go:52] kubelet running: false
	I1201 22:11:07.899471  692767 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 22:11:08.042046  692767 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 22:11:08.042142  692767 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 22:11:08.116304  692767 cri.go:89] found id: "34ef219e4fc6b788dede490a784e4cdb33ebd117e7eae0c1ce37fbcbdfae616b"
	I1201 22:11:08.116330  692767 cri.go:89] found id: "dd1313324144863644a9ae08f65ff0230b115b579027738e7ca471d556ff8723"
	I1201 22:11:08.116335  692767 cri.go:89] found id: "23eb3c5f241ec3c013861dcfd53965dc750a523680455562b21e81f35d69d660"
	I1201 22:11:08.116339  692767 cri.go:89] found id: "dd06372589408364b8b58065de97cbe55de7a24dfcbd37bfc9b061320d5e4539"
	I1201 22:11:08.116342  692767 cri.go:89] found id: "bebec9756e9fcbf5790b7011256eaa67c0c522c378104d3e1ef3cdef4e894373"
	I1201 22:11:08.116346  692767 cri.go:89] found id: "10a70a6f5dbc1cf1cdce344789e47ea80729baa80aa552017e27ca47b2227324"
	I1201 22:11:08.116349  692767 cri.go:89] found id: "125f9c98221f08aefc5c1d5767531793e6623aafdbbe2788da3f53d0b37c5b5f"
	I1201 22:11:08.116352  692767 cri.go:89] found id: "81d2cfcf9894f9d0a557d8c60957ff9e303266d6d73df814a3f9fa53142a11f0"
	I1201 22:11:08.116375  692767 cri.go:89] found id: "f517468165e6149e74b6caf291fa53acbeda3290845551238b0ba8999a831f3a"
	I1201 22:11:08.116393  692767 cri.go:89] found id: "55f3da13e226d8e308b654b59a220e4446138e3b1a7e31583e355636f80f4a1e"
	I1201 22:11:08.116401  692767 cri.go:89] found id: "ee45ee63674dd2a1ab4e2fc4f3a39bf0a55bbc05e2ffd6adef12eb238ba3b810"
	I1201 22:11:08.116404  692767 cri.go:89] found id: "829d829e6c58b92ed7c38ab13e650eb88fd6a1b9d086634b309b04cd79ffb2eb"
	I1201 22:11:08.116407  692767 cri.go:89] found id: "9d0eee6eaad1b35e670616323ad9a68b2e29bce0607e45617707c0e8878e234e"
	I1201 22:11:08.116410  692767 cri.go:89] found id: "7f474f99bf26bbeb5fd1570a1b4d07a037ededb0976f97b7cfaf0397b2fad3c9"
	I1201 22:11:08.116413  692767 cri.go:89] found id: ""
	I1201 22:11:08.116473  692767 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 22:11:08.128295  692767 retry.go:31] will retry after 411.725771ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T22:11:08Z" level=error msg="open /run/runc: no such file or directory"
	I1201 22:11:08.541082  692767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 22:11:08.554523  692767 pause.go:52] kubelet running: false
	I1201 22:11:08.554613  692767 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 22:11:08.704130  692767 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 22:11:08.704211  692767 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 22:11:08.778442  692767 cri.go:89] found id: "34ef219e4fc6b788dede490a784e4cdb33ebd117e7eae0c1ce37fbcbdfae616b"
	I1201 22:11:08.778528  692767 cri.go:89] found id: "dd1313324144863644a9ae08f65ff0230b115b579027738e7ca471d556ff8723"
	I1201 22:11:08.778548  692767 cri.go:89] found id: "23eb3c5f241ec3c013861dcfd53965dc750a523680455562b21e81f35d69d660"
	I1201 22:11:08.778572  692767 cri.go:89] found id: "dd06372589408364b8b58065de97cbe55de7a24dfcbd37bfc9b061320d5e4539"
	I1201 22:11:08.778605  692767 cri.go:89] found id: "bebec9756e9fcbf5790b7011256eaa67c0c522c378104d3e1ef3cdef4e894373"
	I1201 22:11:08.778640  692767 cri.go:89] found id: "10a70a6f5dbc1cf1cdce344789e47ea80729baa80aa552017e27ca47b2227324"
	I1201 22:11:08.778662  692767 cri.go:89] found id: "125f9c98221f08aefc5c1d5767531793e6623aafdbbe2788da3f53d0b37c5b5f"
	I1201 22:11:08.778685  692767 cri.go:89] found id: "81d2cfcf9894f9d0a557d8c60957ff9e303266d6d73df814a3f9fa53142a11f0"
	I1201 22:11:08.778724  692767 cri.go:89] found id: "f517468165e6149e74b6caf291fa53acbeda3290845551238b0ba8999a831f3a"
	I1201 22:11:08.778755  692767 cri.go:89] found id: "55f3da13e226d8e308b654b59a220e4446138e3b1a7e31583e355636f80f4a1e"
	I1201 22:11:08.778777  692767 cri.go:89] found id: "ee45ee63674dd2a1ab4e2fc4f3a39bf0a55bbc05e2ffd6adef12eb238ba3b810"
	I1201 22:11:08.778799  692767 cri.go:89] found id: "829d829e6c58b92ed7c38ab13e650eb88fd6a1b9d086634b309b04cd79ffb2eb"
	I1201 22:11:08.778831  692767 cri.go:89] found id: "9d0eee6eaad1b35e670616323ad9a68b2e29bce0607e45617707c0e8878e234e"
	I1201 22:11:08.778858  692767 cri.go:89] found id: "7f474f99bf26bbeb5fd1570a1b4d07a037ededb0976f97b7cfaf0397b2fad3c9"
	I1201 22:11:08.778878  692767 cri.go:89] found id: ""
	I1201 22:11:08.778963  692767 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 22:11:08.790991  692767 retry.go:31] will retry after 325.475947ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T22:11:08Z" level=error msg="open /run/runc: no such file or directory"
	I1201 22:11:09.117617  692767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 22:11:09.130617  692767 pause.go:52] kubelet running: false
	I1201 22:11:09.130677  692767 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 22:11:09.290430  692767 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 22:11:09.290553  692767 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 22:11:09.362374  692767 cri.go:89] found id: "34ef219e4fc6b788dede490a784e4cdb33ebd117e7eae0c1ce37fbcbdfae616b"
	I1201 22:11:09.362448  692767 cri.go:89] found id: "dd1313324144863644a9ae08f65ff0230b115b579027738e7ca471d556ff8723"
	I1201 22:11:09.362471  692767 cri.go:89] found id: "23eb3c5f241ec3c013861dcfd53965dc750a523680455562b21e81f35d69d660"
	I1201 22:11:09.362494  692767 cri.go:89] found id: "dd06372589408364b8b58065de97cbe55de7a24dfcbd37bfc9b061320d5e4539"
	I1201 22:11:09.362537  692767 cri.go:89] found id: "bebec9756e9fcbf5790b7011256eaa67c0c522c378104d3e1ef3cdef4e894373"
	I1201 22:11:09.362558  692767 cri.go:89] found id: "10a70a6f5dbc1cf1cdce344789e47ea80729baa80aa552017e27ca47b2227324"
	I1201 22:11:09.362579  692767 cri.go:89] found id: "125f9c98221f08aefc5c1d5767531793e6623aafdbbe2788da3f53d0b37c5b5f"
	I1201 22:11:09.362613  692767 cri.go:89] found id: "81d2cfcf9894f9d0a557d8c60957ff9e303266d6d73df814a3f9fa53142a11f0"
	I1201 22:11:09.362633  692767 cri.go:89] found id: "f517468165e6149e74b6caf291fa53acbeda3290845551238b0ba8999a831f3a"
	I1201 22:11:09.362663  692767 cri.go:89] found id: "55f3da13e226d8e308b654b59a220e4446138e3b1a7e31583e355636f80f4a1e"
	I1201 22:11:09.362695  692767 cri.go:89] found id: "ee45ee63674dd2a1ab4e2fc4f3a39bf0a55bbc05e2ffd6adef12eb238ba3b810"
	I1201 22:11:09.362724  692767 cri.go:89] found id: "829d829e6c58b92ed7c38ab13e650eb88fd6a1b9d086634b309b04cd79ffb2eb"
	I1201 22:11:09.362745  692767 cri.go:89] found id: "9d0eee6eaad1b35e670616323ad9a68b2e29bce0607e45617707c0e8878e234e"
	I1201 22:11:09.362775  692767 cri.go:89] found id: "7f474f99bf26bbeb5fd1570a1b4d07a037ededb0976f97b7cfaf0397b2fad3c9"
	I1201 22:11:09.362791  692767 cri.go:89] found id: ""
	I1201 22:11:09.362878  692767 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 22:11:09.378316  692767 out.go:203] 
	W1201 22:11:09.382541  692767 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T22:11:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 22:11:09.382571  692767 out.go:285] * 
	W1201 22:11:09.390527  692767 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 22:11:09.393614  692767 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-188533 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-188533
helpers_test.go:243: (dbg) docker inspect pause-188533:

-- stdout --
	[
	    {
	        "Id": "56bed8e2da48713a38d2da87977f667743a96c567f41b8251563801866033a15",
	        "Created": "2025-12-01T22:09:22.266784657Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 688889,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T22:09:22.33326856Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/56bed8e2da48713a38d2da87977f667743a96c567f41b8251563801866033a15/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/56bed8e2da48713a38d2da87977f667743a96c567f41b8251563801866033a15/hostname",
	        "HostsPath": "/var/lib/docker/containers/56bed8e2da48713a38d2da87977f667743a96c567f41b8251563801866033a15/hosts",
	        "LogPath": "/var/lib/docker/containers/56bed8e2da48713a38d2da87977f667743a96c567f41b8251563801866033a15/56bed8e2da48713a38d2da87977f667743a96c567f41b8251563801866033a15-json.log",
	        "Name": "/pause-188533",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-188533:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-188533",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "56bed8e2da48713a38d2da87977f667743a96c567f41b8251563801866033a15",
	                "LowerDir": "/var/lib/docker/overlay2/da6d7a6f99a949509f6893d10c827324af7dde87ebdf51a6a5e3471d94cd312c-init/diff:/var/lib/docker/overlay2/f0ba49b44048d740697b37803f992c2f7a99e21ce77995ff128ceffc01329aa1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da6d7a6f99a949509f6893d10c827324af7dde87ebdf51a6a5e3471d94cd312c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da6d7a6f99a949509f6893d10c827324af7dde87ebdf51a6a5e3471d94cd312c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da6d7a6f99a949509f6893d10c827324af7dde87ebdf51a6a5e3471d94cd312c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-188533",
	                "Source": "/var/lib/docker/volumes/pause-188533/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-188533",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-188533",
	                "name.minikube.sigs.k8s.io": "pause-188533",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aa85ef5b998c2ec06e1d027e56c8c8421693aca39f30c12d9128156dcc286711",
	            "SandboxKey": "/var/run/docker/netns/aa85ef5b998c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-188533": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:2a:b2:9d:e4:0d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8eb773d1b4921a3f23af0eb8ba4d602ded05f859cfc5259a39cee841753a40e2",
	                    "EndpointID": "834500a0d9f0f5caa17339584e46f145c9dbb5fc0c42a9b232c013658ff04eb0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-188533",
	                        "56bed8e2da48"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-188533 -n pause-188533
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-188533 -n pause-188533: exit status 2 (384.971833ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-188533 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-188533 logs -n 25: (1.655273775s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-516822 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-516822       │ jenkins │ v1.37.0 │ 01 Dec 25 22:00 UTC │ 01 Dec 25 22:01 UTC │
	│ start   │ -p missing-upgrade-152595 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-152595    │ jenkins │ v1.35.0 │ 01 Dec 25 22:00 UTC │ 01 Dec 25 22:02 UTC │
	│ start   │ -p NoKubernetes-516822 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-516822       │ jenkins │ v1.37.0 │ 01 Dec 25 22:01 UTC │ 01 Dec 25 22:01 UTC │
	│ delete  │ -p NoKubernetes-516822                                                                                                                          │ NoKubernetes-516822       │ jenkins │ v1.37.0 │ 01 Dec 25 22:01 UTC │ 01 Dec 25 22:01 UTC │
	│ start   │ -p NoKubernetes-516822 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-516822       │ jenkins │ v1.37.0 │ 01 Dec 25 22:01 UTC │ 01 Dec 25 22:01 UTC │
	│ ssh     │ -p NoKubernetes-516822 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-516822       │ jenkins │ v1.37.0 │ 01 Dec 25 22:01 UTC │                     │
	│ stop    │ -p NoKubernetes-516822                                                                                                                          │ NoKubernetes-516822       │ jenkins │ v1.37.0 │ 01 Dec 25 22:01 UTC │ 01 Dec 25 22:01 UTC │
	│ start   │ -p NoKubernetes-516822 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-516822       │ jenkins │ v1.37.0 │ 01 Dec 25 22:01 UTC │ 01 Dec 25 22:01 UTC │
	│ ssh     │ -p NoKubernetes-516822 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-516822       │ jenkins │ v1.37.0 │ 01 Dec 25 22:01 UTC │                     │
	│ delete  │ -p NoKubernetes-516822                                                                                                                          │ NoKubernetes-516822       │ jenkins │ v1.37.0 │ 01 Dec 25 22:01 UTC │ 01 Dec 25 22:01 UTC │
	│ start   │ -p kubernetes-upgrade-738753 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-738753 │ jenkins │ v1.37.0 │ 01 Dec 25 22:01 UTC │ 01 Dec 25 22:02 UTC │
	│ start   │ -p missing-upgrade-152595 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-152595    │ jenkins │ v1.37.0 │ 01 Dec 25 22:02 UTC │ 01 Dec 25 22:03 UTC │
	│ stop    │ -p kubernetes-upgrade-738753                                                                                                                    │ kubernetes-upgrade-738753 │ jenkins │ v1.37.0 │ 01 Dec 25 22:02 UTC │ 01 Dec 25 22:02 UTC │
	│ start   │ -p kubernetes-upgrade-738753 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-738753 │ jenkins │ v1.37.0 │ 01 Dec 25 22:02 UTC │                     │
	│ delete  │ -p missing-upgrade-152595                                                                                                                       │ missing-upgrade-152595    │ jenkins │ v1.37.0 │ 01 Dec 25 22:03 UTC │ 01 Dec 25 22:03 UTC │
	│ start   │ -p stopped-upgrade-952426 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-952426    │ jenkins │ v1.35.0 │ 01 Dec 25 22:03 UTC │ 01 Dec 25 22:03 UTC │
	│ stop    │ stopped-upgrade-952426 stop                                                                                                                     │ stopped-upgrade-952426    │ jenkins │ v1.35.0 │ 01 Dec 25 22:03 UTC │ 01 Dec 25 22:03 UTC │
	│ start   │ -p stopped-upgrade-952426 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-952426    │ jenkins │ v1.37.0 │ 01 Dec 25 22:03 UTC │ 01 Dec 25 22:08 UTC │
	│ delete  │ -p stopped-upgrade-952426                                                                                                                       │ stopped-upgrade-952426    │ jenkins │ v1.37.0 │ 01 Dec 25 22:08 UTC │ 01 Dec 25 22:08 UTC │
	│ start   │ -p running-upgrade-976949 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-976949    │ jenkins │ v1.35.0 │ 01 Dec 25 22:08 UTC │ 01 Dec 25 22:08 UTC │
	│ start   │ -p running-upgrade-976949 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-976949    │ jenkins │ v1.37.0 │ 01 Dec 25 22:08 UTC │ 01 Dec 25 22:09 UTC │
	│ delete  │ -p running-upgrade-976949                                                                                                                       │ running-upgrade-976949    │ jenkins │ v1.37.0 │ 01 Dec 25 22:09 UTC │ 01 Dec 25 22:09 UTC │
	│ start   │ -p pause-188533 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-188533              │ jenkins │ v1.37.0 │ 01 Dec 25 22:09 UTC │ 01 Dec 25 22:10 UTC │
	│ start   │ -p pause-188533 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-188533              │ jenkins │ v1.37.0 │ 01 Dec 25 22:10 UTC │ 01 Dec 25 22:11 UTC │
	│ pause   │ -p pause-188533 --alsologtostderr -v=5                                                                                                          │ pause-188533              │ jenkins │ v1.37.0 │ 01 Dec 25 22:11 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 22:10:40
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 22:10:40.783647  691475 out.go:360] Setting OutFile to fd 1 ...
	I1201 22:10:40.783853  691475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 22:10:40.783887  691475 out.go:374] Setting ErrFile to fd 2...
	I1201 22:10:40.783914  691475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 22:10:40.784312  691475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 22:10:40.784801  691475 out.go:368] Setting JSON to false
	I1201 22:10:40.786146  691475 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":13990,"bootTime":1764613051,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 22:10:40.786233  691475 start.go:143] virtualization:  
	I1201 22:10:40.789151  691475 out.go:179] * [pause-188533] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 22:10:40.792991  691475 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 22:10:40.793157  691475 notify.go:221] Checking for updates...
	I1201 22:10:40.798720  691475 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 22:10:40.801724  691475 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 22:10:40.804566  691475 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 22:10:40.807368  691475 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 22:10:40.810244  691475 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 22:10:40.813718  691475 config.go:182] Loaded profile config "pause-188533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 22:10:40.814324  691475 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 22:10:40.853471  691475 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 22:10:40.853589  691475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 22:10:40.931193  691475 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-01 22:10:40.919786129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 22:10:40.931307  691475 docker.go:319] overlay module found
	I1201 22:10:40.934602  691475 out.go:179] * Using the docker driver based on existing profile
	I1201 22:10:40.937515  691475 start.go:309] selected driver: docker
	I1201 22:10:40.937541  691475 start.go:927] validating driver "docker" against &{Name:pause-188533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-188533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 22:10:40.937741  691475 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 22:10:40.937848  691475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 22:10:41.017388  691475 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-01 22:10:41.006274004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 22:10:41.017833  691475 cni.go:84] Creating CNI manager for ""
	I1201 22:10:41.017902  691475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 22:10:41.017953  691475 start.go:353] cluster config:
	{Name:pause-188533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-188533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 22:10:41.021213  691475 out.go:179] * Starting "pause-188533" primary control-plane node in "pause-188533" cluster
	I1201 22:10:41.024099  691475 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 22:10:41.027670  691475 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 22:10:41.031121  691475 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 22:10:41.031441  691475 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1201 22:10:41.031488  691475 cache.go:65] Caching tarball of preloaded images
	I1201 22:10:41.031331  691475 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 22:10:41.031864  691475 preload.go:238] Found /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1201 22:10:41.031900  691475 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 22:10:41.032094  691475 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/config.json ...
	I1201 22:10:41.054684  691475 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 22:10:41.054714  691475 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1201 22:10:41.054728  691475 cache.go:243] Successfully downloaded all kic artifacts
	I1201 22:10:41.054762  691475 start.go:360] acquireMachinesLock for pause-188533: {Name:mkffc0f334f79b444589375776f7dd5028fc5c89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 22:10:41.054826  691475 start.go:364] duration metric: took 38.498µs to acquireMachinesLock for "pause-188533"
	I1201 22:10:41.054848  691475 start.go:96] Skipping create...Using existing machine configuration
	I1201 22:10:41.054862  691475 fix.go:54] fixHost starting: 
	I1201 22:10:41.055125  691475 cli_runner.go:164] Run: docker container inspect pause-188533 --format={{.State.Status}}
	I1201 22:10:41.076041  691475 fix.go:112] recreateIfNeeded on pause-188533: state=Running err=<nil>
	W1201 22:10:41.076075  691475 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 22:10:41.079360  691475 out.go:252] * Updating the running docker "pause-188533" container ...
	I1201 22:10:41.079399  691475 machine.go:94] provisionDockerMachine start ...
	I1201 22:10:41.079502  691475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188533
	I1201 22:10:41.097969  691475 main.go:143] libmachine: Using SSH client type: native
	I1201 22:10:41.098341  691475 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1201 22:10:41.098356  691475 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 22:10:41.255942  691475 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-188533
	
	I1201 22:10:41.256065  691475 ubuntu.go:182] provisioning hostname "pause-188533"
	I1201 22:10:41.256170  691475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188533
	I1201 22:10:41.274900  691475 main.go:143] libmachine: Using SSH client type: native
	I1201 22:10:41.275305  691475 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1201 22:10:41.275322  691475 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-188533 && echo "pause-188533" | sudo tee /etc/hostname
	I1201 22:10:41.438814  691475 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-188533
	
	I1201 22:10:41.438908  691475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188533
	I1201 22:10:41.465705  691475 main.go:143] libmachine: Using SSH client type: native
	I1201 22:10:41.466039  691475 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1201 22:10:41.466075  691475 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-188533' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-188533/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-188533' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 22:10:41.616151  691475 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 22:10:41.616178  691475 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-482752/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-482752/.minikube}
	I1201 22:10:41.616223  691475 ubuntu.go:190] setting up certificates
	I1201 22:10:41.616234  691475 provision.go:84] configureAuth start
	I1201 22:10:41.616310  691475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-188533
	I1201 22:10:41.636985  691475 provision.go:143] copyHostCerts
	I1201 22:10:41.637066  691475 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem, removing ...
	I1201 22:10:41.637085  691475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 22:10:41.637168  691475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem (1123 bytes)
	I1201 22:10:41.637283  691475 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem, removing ...
	I1201 22:10:41.637289  691475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 22:10:41.637317  691475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem (1675 bytes)
	I1201 22:10:41.637381  691475 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem, removing ...
	I1201 22:10:41.637385  691475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 22:10:41.637410  691475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem (1082 bytes)
	I1201 22:10:41.637469  691475 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem org=jenkins.pause-188533 san=[127.0.0.1 192.168.85.2 localhost minikube pause-188533]
	I1201 22:10:42.228698  691475 provision.go:177] copyRemoteCerts
	I1201 22:10:42.228792  691475 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 22:10:42.228840  691475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188533
	I1201 22:10:42.264597  691475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/pause-188533/id_rsa Username:docker}
	I1201 22:10:42.376442  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1201 22:10:42.396674  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 22:10:42.415478  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1201 22:10:42.434458  691475 provision.go:87] duration metric: took 818.199534ms to configureAuth
	I1201 22:10:42.434532  691475 ubuntu.go:206] setting minikube options for container-runtime
	I1201 22:10:42.434765  691475 config.go:182] Loaded profile config "pause-188533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 22:10:42.434876  691475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188533
	I1201 22:10:42.454395  691475 main.go:143] libmachine: Using SSH client type: native
	I1201 22:10:42.454705  691475 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1201 22:10:42.454719  691475 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 22:10:47.861712  691475 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 22:10:47.861736  691475 machine.go:97] duration metric: took 6.782327254s to provisionDockerMachine
	I1201 22:10:47.861749  691475 start.go:293] postStartSetup for "pause-188533" (driver="docker")
	I1201 22:10:47.861761  691475 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 22:10:47.861828  691475 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 22:10:47.861899  691475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188533
	I1201 22:10:47.881097  691475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/pause-188533/id_rsa Username:docker}
	I1201 22:10:47.991841  691475 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 22:10:47.995677  691475 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 22:10:47.995709  691475 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 22:10:47.995721  691475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/addons for local assets ...
	I1201 22:10:47.995778  691475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/files for local assets ...
	I1201 22:10:47.995865  691475 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> 4860022.pem in /etc/ssl/certs
	I1201 22:10:47.995987  691475 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 22:10:48.004239  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 22:10:48.033891  691475 start.go:296] duration metric: took 172.124566ms for postStartSetup
	I1201 22:10:48.033990  691475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 22:10:48.034056  691475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188533
	I1201 22:10:48.053241  691475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/pause-188533/id_rsa Username:docker}
	I1201 22:10:48.156897  691475 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 22:10:48.162184  691475 fix.go:56] duration metric: took 7.107314893s for fixHost
	I1201 22:10:48.162211  691475 start.go:83] releasing machines lock for "pause-188533", held for 7.107374888s
	I1201 22:10:48.162285  691475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-188533
	I1201 22:10:48.180180  691475 ssh_runner.go:195] Run: cat /version.json
	I1201 22:10:48.180237  691475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188533
	I1201 22:10:48.180491  691475 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 22:10:48.180562  691475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188533
	I1201 22:10:48.201352  691475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/pause-188533/id_rsa Username:docker}
	I1201 22:10:48.204692  691475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/pause-188533/id_rsa Username:docker}
	I1201 22:10:48.303590  691475 ssh_runner.go:195] Run: systemctl --version
	I1201 22:10:48.398682  691475 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 22:10:48.441635  691475 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 22:10:48.446212  691475 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 22:10:48.446328  691475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 22:10:48.454676  691475 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 22:10:48.454699  691475 start.go:496] detecting cgroup driver to use...
	I1201 22:10:48.454732  691475 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1201 22:10:48.454783  691475 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 22:10:48.471027  691475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 22:10:48.484876  691475 docker.go:218] disabling cri-docker service (if available) ...
	I1201 22:10:48.484949  691475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 22:10:48.502158  691475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 22:10:48.516490  691475 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 22:10:48.649818  691475 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 22:10:48.816954  691475 docker.go:234] disabling docker service ...
	I1201 22:10:48.817041  691475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 22:10:48.833391  691475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 22:10:48.847742  691475 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 22:10:48.991410  691475 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 22:10:49.142059  691475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 22:10:49.155999  691475 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 22:10:49.171802  691475 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 22:10:49.171868  691475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:10:49.181736  691475 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 22:10:49.181905  691475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:10:49.191849  691475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:10:49.201624  691475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:10:49.210975  691475 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 22:10:49.219570  691475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:10:49.229546  691475 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:10:49.238531  691475 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:10:49.248271  691475 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 22:10:49.256303  691475 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 22:10:49.264254  691475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 22:10:49.403742  691475 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 22:10:49.622057  691475 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 22:10:49.622126  691475 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 22:10:49.626165  691475 start.go:564] Will wait 60s for crictl version
	I1201 22:10:49.626237  691475 ssh_runner.go:195] Run: which crictl
	I1201 22:10:49.630035  691475 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 22:10:49.655986  691475 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 22:10:49.656091  691475 ssh_runner.go:195] Run: crio --version
	I1201 22:10:49.686776  691475 ssh_runner.go:195] Run: crio --version
	I1201 22:10:49.721385  691475 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1201 22:10:49.724366  691475 cli_runner.go:164] Run: docker network inspect pause-188533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 22:10:49.741205  691475 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1201 22:10:49.745652  691475 kubeadm.go:884] updating cluster {Name:pause-188533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-188533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 22:10:49.745820  691475 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 22:10:49.745897  691475 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 22:10:49.783601  691475 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 22:10:49.783628  691475 crio.go:433] Images already preloaded, skipping extraction
	I1201 22:10:49.783696  691475 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 22:10:49.810926  691475 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 22:10:49.810954  691475 cache_images.go:86] Images are preloaded, skipping loading
	I1201 22:10:49.810963  691475 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1201 22:10:49.811082  691475 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-188533 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-188533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 22:10:49.811225  691475 ssh_runner.go:195] Run: crio config
	I1201 22:10:49.887737  691475 cni.go:84] Creating CNI manager for ""
	I1201 22:10:49.887765  691475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 22:10:49.887820  691475 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 22:10:49.887853  691475 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-188533 NodeName:pause-188533 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 22:10:49.888008  691475 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-188533"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
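
	The kubeadm config printed above is a single four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube writes to /var/tmp/minikube/kubeadm.yaml.new. A rough standard-library sketch of a sanity check over such a stream — the SAMPLE string is an abbreviated stand-in for the real config, and a real validator would use a proper YAML parser:

```python
# Extract the top-level "kind:" of each document in a multi-document
# YAML stream by splitting on the "---" separators. Sketch only: good
# enough to sanity-check which documents a generated kubeadm config
# contains, not a substitute for real YAML parsing.

SAMPLE = """\
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

def document_kinds(stream: str) -> list[str]:
    kinds = []
    for doc in stream.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kinds.append(line.split(":", 1)[1].strip())
    return kinds

print(document_kinds(SAMPLE))
```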
	
	I1201 22:10:49.888099  691475 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 22:10:49.896829  691475 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 22:10:49.896931  691475 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 22:10:49.905075  691475 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1201 22:10:49.918332  691475 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 22:10:49.931806  691475 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1201 22:10:49.944962  691475 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1201 22:10:49.948737  691475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 22:10:50.089576  691475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 22:10:50.104603  691475 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533 for IP: 192.168.85.2
	I1201 22:10:50.104628  691475 certs.go:195] generating shared ca certs ...
	I1201 22:10:50.104645  691475 certs.go:227] acquiring lock for ca certs: {Name:mk0475ccdbd6f854bab22fd8dfb32cc1af021336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 22:10:50.104875  691475 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key
	I1201 22:10:50.104952  691475 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key
	I1201 22:10:50.104966  691475 certs.go:257] generating profile certs ...
	I1201 22:10:50.105075  691475 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/client.key
	I1201 22:10:50.105157  691475 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/apiserver.key.1495b8ca
	I1201 22:10:50.105209  691475 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/proxy-client.key
	I1201 22:10:50.105327  691475 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem (1338 bytes)
	W1201 22:10:50.105373  691475 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002_empty.pem, impossibly tiny 0 bytes
	I1201 22:10:50.105387  691475 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 22:10:50.105420  691475 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem (1082 bytes)
	I1201 22:10:50.105453  691475 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem (1123 bytes)
	I1201 22:10:50.105484  691475 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem (1675 bytes)
	I1201 22:10:50.105536  691475 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 22:10:50.106142  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 22:10:50.127296  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 22:10:50.147539  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 22:10:50.180070  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1201 22:10:50.206374  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1201 22:10:50.231357  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 22:10:50.250462  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 22:10:50.269841  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 22:10:50.288573  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem --> /usr/share/ca-certificates/486002.pem (1338 bytes)
	I1201 22:10:50.306806  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /usr/share/ca-certificates/4860022.pem (1708 bytes)
	I1201 22:10:50.326456  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 22:10:50.345367  691475 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 22:10:50.359079  691475 ssh_runner.go:195] Run: openssl version
	I1201 22:10:50.365709  691475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/486002.pem && ln -fs /usr/share/ca-certificates/486002.pem /etc/ssl/certs/486002.pem"
	I1201 22:10:50.380228  691475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/486002.pem
	I1201 22:10:50.384198  691475 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 22:10:50.384303  691475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/486002.pem
	I1201 22:10:50.427347  691475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/486002.pem /etc/ssl/certs/51391683.0"
	I1201 22:10:50.435554  691475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4860022.pem && ln -fs /usr/share/ca-certificates/4860022.pem /etc/ssl/certs/4860022.pem"
	I1201 22:10:50.444256  691475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4860022.pem
	I1201 22:10:50.448228  691475 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 22:10:50.448298  691475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4860022.pem
	I1201 22:10:50.490761  691475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4860022.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 22:10:50.499544  691475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 22:10:50.508732  691475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 22:10:50.513166  691475 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 22:10:50.513250  691475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 22:10:50.556332  691475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
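
	The `openssl x509 -hash` / `ln -fs` pairs above build OpenSSL's hashed-directory layout: each CA certificate becomes reachable in /etc/ssl/certs under a symlink named <subject_hash>.0, where the hash is what `openssl x509 -hash -noout` prints. A sketch of that naming convention in Python, using the hash values from the log and a temporary directory with empty placeholder files instead of real PEMs:

```python
import os
import tempfile

# Mirror the hashed-dir convention: for each CA cert, create a symlink
# "<subject_hash>.0" pointing at the PEM, so OpenSSL can find the CA by
# subject hash. Hash values are the ones printed in the log above;
# the PEM files here are empty placeholders, not real certificates.
CA_HASHES = {
    "486002.pem": "51391683",
    "4860022.pem": "3ec20f2e",
    "minikubeCA.pem": "b5213941",
}

def link_hashed(certs_dir: str) -> dict[str, str]:
    links = {}
    for pem, subject_hash in CA_HASHES.items():
        target = os.path.join(certs_dir, pem)
        open(target, "w").close()         # placeholder for the PEM file
        link = os.path.join(certs_dir, subject_hash + ".0")
        if not os.path.islink(link):      # mirrors the "test -L || ln -fs" guard
            os.symlink(target, link)
        links[pem] = link
    return links

with tempfile.TemporaryDirectory() as d:
    made = link_hashed(d)
    print(sorted(os.path.basename(p) for p in made.values()))
```

The `test -L || ln -fs` guard in the log serves the same purpose as the `islink` check: the link is only (re)created when it does not already exist.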
	I1201 22:10:50.565301  691475 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 22:10:50.569307  691475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 22:10:50.611070  691475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 22:10:50.653588  691475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 22:10:50.695453  691475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 22:10:50.736984  691475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 22:10:50.778455  691475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 22:10:50.822944  691475 kubeadm.go:401] StartCluster: {Name:pause-188533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-188533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 22:10:50.823074  691475 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 22:10:50.823206  691475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 22:10:50.852284  691475 cri.go:89] found id: "81d2cfcf9894f9d0a557d8c60957ff9e303266d6d73df814a3f9fa53142a11f0"
	I1201 22:10:50.852350  691475 cri.go:89] found id: "f517468165e6149e74b6caf291fa53acbeda3290845551238b0ba8999a831f3a"
	I1201 22:10:50.852362  691475 cri.go:89] found id: "55f3da13e226d8e308b654b59a220e4446138e3b1a7e31583e355636f80f4a1e"
	I1201 22:10:50.852367  691475 cri.go:89] found id: "ee45ee63674dd2a1ab4e2fc4f3a39bf0a55bbc05e2ffd6adef12eb238ba3b810"
	I1201 22:10:50.852371  691475 cri.go:89] found id: "829d829e6c58b92ed7c38ab13e650eb88fd6a1b9d086634b309b04cd79ffb2eb"
	I1201 22:10:50.852374  691475 cri.go:89] found id: "9d0eee6eaad1b35e670616323ad9a68b2e29bce0607e45617707c0e8878e234e"
	I1201 22:10:50.852378  691475 cri.go:89] found id: "7f474f99bf26bbeb5fd1570a1b4d07a037ededb0976f97b7cfaf0397b2fad3c9"
	I1201 22:10:50.852389  691475 cri.go:89] found id: ""
	I1201 22:10:50.852464  691475 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 22:10:50.867970  691475 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T22:10:50Z" level=error msg="open /run/runc: no such file or directory"
	I1201 22:10:50.868134  691475 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 22:10:50.880296  691475 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 22:10:50.880319  691475 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 22:10:50.880378  691475 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 22:10:50.888978  691475 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 22:10:50.889677  691475 kubeconfig.go:125] found "pause-188533" server: "https://192.168.85.2:8443"
	I1201 22:10:50.890478  691475 kapi.go:59] client config for pause-188533: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/client.key", CAFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 22:10:50.890985  691475 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1201 22:10:50.891006  691475 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1201 22:10:50.891012  691475 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1201 22:10:50.891016  691475 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1201 22:10:50.891020  691475 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1201 22:10:50.893607  691475 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 22:10:50.914765  691475 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1201 22:10:50.914800  691475 kubeadm.go:602] duration metric: took 34.474957ms to restartPrimaryControlPlane
	I1201 22:10:50.914812  691475 kubeadm.go:403] duration metric: took 91.879273ms to StartCluster
	I1201 22:10:50.914828  691475 settings.go:142] acquiring lock: {Name:mk783c1fd28fb527bb837882511f132133dc86fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 22:10:50.914896  691475 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 22:10:50.915791  691475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/kubeconfig: {Name:mk92cfd0553ba70a7f11610c1bc1b8b04b905ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 22:10:50.916011  691475 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 22:10:50.916370  691475 config.go:182] Loaded profile config "pause-188533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 22:10:50.916416  691475 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 22:10:50.919473  691475 out.go:179] * Verifying Kubernetes components...
	I1201 22:10:50.919559  691475 out.go:179] * Enabled addons: 
	I1201 22:10:50.922419  691475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 22:10:50.922541  691475 addons.go:530] duration metric: took 6.126817ms for enable addons: enabled=[]
	I1201 22:10:51.113777  691475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 22:10:51.139063  691475 node_ready.go:35] waiting up to 6m0s for node "pause-188533" to be "Ready" ...
	I1201 22:10:56.548598  691475 node_ready.go:49] node "pause-188533" is "Ready"
	I1201 22:10:56.548633  691475 node_ready.go:38] duration metric: took 5.40953418s for node "pause-188533" to be "Ready" ...
	I1201 22:10:56.548647  691475 api_server.go:52] waiting for apiserver process to appear ...
	I1201 22:10:56.548711  691475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:10:56.573773  691475 api_server.go:72] duration metric: took 5.657728382s to wait for apiserver process to appear ...
	I1201 22:10:56.573802  691475 api_server.go:88] waiting for apiserver healthz status ...
	I1201 22:10:56.573823  691475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1201 22:10:56.640013  691475 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 22:10:56.640059  691475 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
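
	The entries that follow repeat a simple pattern: probe https://192.168.85.2:8443/healthz roughly every 500ms, treat a 500 ("healthz check failed") as not-ready-yet, and stop once it returns 200. A minimal sketch of that wait loop, with the HTTP probe injected as a callable so it can be exercised without a live apiserver (the `wait_healthy` helper and fake probe are illustrative, not minikube's actual code):

```python
import time

def wait_healthy(probe, timeout: float = 60.0, interval: float = 0.5) -> int:
    """Poll `probe()` (which returns an HTTP status code) until it reports
    200, sleeping `interval` seconds between attempts. Returns the number
    of attempts taken, or raises TimeoutError at the deadline."""
    deadline = time.monotonic() + timeout
    attempts = 0
    while time.monotonic() < deadline:
        attempts += 1
        if probe() == 200:
            return attempts
        time.sleep(interval)
    raise TimeoutError("apiserver /healthz never returned 200")

# Fake probe: fail twice with 500 (e.g. "[-]etcd failed: reason withheld"
# while etcd is still coming up), then succeed.
codes = iter([500, 500, 200])
print(wait_healthy(lambda: next(codes), interval=0.01))
```

In the real log the individual poststarthook failures clear one by one across retries (etcd first, then the bootstrap controllers), until only rbac/bootstrap-roles remains.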
	I1201 22:10:57.074688  691475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1201 22:10:57.083469  691475 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 22:10:57.083508  691475 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 22:10:57.574626  691475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1201 22:10:57.582954  691475 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 22:10:57.582987  691475 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 22:10:58.074203  691475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1201 22:10:58.082778  691475 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1201 22:10:58.084268  691475 api_server.go:141] control plane version: v1.34.2
	I1201 22:10:58.084302  691475 api_server.go:131] duration metric: took 1.51048963s to wait for apiserver health ...
	I1201 22:10:58.084312  691475 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 22:10:58.089673  691475 system_pods.go:59] 7 kube-system pods found
	I1201 22:10:58.089728  691475 system_pods.go:61] "coredns-66bc5c9577-p9whp" [1ae3303f-38c2-4928-9594-2bdc7c75b9e0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 22:10:58.089741  691475 system_pods.go:61] "etcd-pause-188533" [2748e5b3-1034-4540-b438-0f01701e4149] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 22:10:58.089783  691475 system_pods.go:61] "kindnet-cwlgd" [0f8efb97-b10e-49da-8be5-e8bf43a2f0b7] Running
	I1201 22:10:58.089791  691475 system_pods.go:61] "kube-apiserver-pause-188533" [317b03c2-af8b-4e84-a383-e784d7fa4e4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 22:10:58.089799  691475 system_pods.go:61] "kube-controller-manager-pause-188533" [ecacf85a-5970-424b-a9ed-792901dcc398] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 22:10:58.089804  691475 system_pods.go:61] "kube-proxy-pff7q" [8e72b6d8-a889-4a4e-89d7-ef091e9af0bb] Running
	I1201 22:10:58.089816  691475 system_pods.go:61] "kube-scheduler-pause-188533" [67c42eac-0489-48f2-a7a4-5da930af5192] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 22:10:58.089823  691475 system_pods.go:74] duration metric: took 5.504784ms to wait for pod list to return data ...
	I1201 22:10:58.089857  691475 default_sa.go:34] waiting for default service account to be created ...
	I1201 22:10:58.092721  691475 default_sa.go:45] found service account: "default"
	I1201 22:10:58.092750  691475 default_sa.go:55] duration metric: took 2.878904ms for default service account to be created ...
	I1201 22:10:58.092761  691475 system_pods.go:116] waiting for k8s-apps to be running ...
	I1201 22:10:58.097523  691475 system_pods.go:86] 7 kube-system pods found
	I1201 22:10:58.097564  691475 system_pods.go:89] "coredns-66bc5c9577-p9whp" [1ae3303f-38c2-4928-9594-2bdc7c75b9e0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 22:10:58.097574  691475 system_pods.go:89] "etcd-pause-188533" [2748e5b3-1034-4540-b438-0f01701e4149] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 22:10:58.097590  691475 system_pods.go:89] "kindnet-cwlgd" [0f8efb97-b10e-49da-8be5-e8bf43a2f0b7] Running
	I1201 22:10:58.097596  691475 system_pods.go:89] "kube-apiserver-pause-188533" [317b03c2-af8b-4e84-a383-e784d7fa4e4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 22:10:58.097608  691475 system_pods.go:89] "kube-controller-manager-pause-188533" [ecacf85a-5970-424b-a9ed-792901dcc398] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 22:10:58.097615  691475 system_pods.go:89] "kube-proxy-pff7q" [8e72b6d8-a889-4a4e-89d7-ef091e9af0bb] Running
	I1201 22:10:58.097622  691475 system_pods.go:89] "kube-scheduler-pause-188533" [67c42eac-0489-48f2-a7a4-5da930af5192] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 22:10:58.097638  691475 system_pods.go:126] duration metric: took 4.869827ms to wait for k8s-apps to be running ...
	I1201 22:10:58.097654  691475 system_svc.go:44] waiting for kubelet service to be running ....
	I1201 22:10:58.097714  691475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 22:10:58.114007  691475 system_svc.go:56] duration metric: took 16.343532ms WaitForService to wait for kubelet
	I1201 22:10:58.114039  691475 kubeadm.go:587] duration metric: took 7.19799703s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 22:10:58.114057  691475 node_conditions.go:102] verifying NodePressure condition ...
	I1201 22:10:58.118420  691475 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1201 22:10:58.118459  691475 node_conditions.go:123] node cpu capacity is 2
	I1201 22:10:58.118477  691475 node_conditions.go:105] duration metric: took 4.410777ms to run NodePressure ...
	I1201 22:10:58.118489  691475 start.go:242] waiting for startup goroutines ...
	I1201 22:10:58.118497  691475 start.go:247] waiting for cluster config update ...
	I1201 22:10:58.118509  691475 start.go:256] writing updated cluster config ...
	I1201 22:10:58.118836  691475 ssh_runner.go:195] Run: rm -f paused
	I1201 22:10:58.122808  691475 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 22:10:58.123667  691475 kapi.go:59] client config for pause-188533: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/client.key", CAFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 22:10:58.189396  691475 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p9whp" in "kube-system" namespace to be "Ready" or be gone ...
	W1201 22:11:00.225544  691475 pod_ready.go:104] pod "coredns-66bc5c9577-p9whp" is not "Ready", error: <nil>
	W1201 22:11:02.695502  691475 pod_ready.go:104] pod "coredns-66bc5c9577-p9whp" is not "Ready", error: <nil>
	I1201 22:11:03.198319  691475 pod_ready.go:94] pod "coredns-66bc5c9577-p9whp" is "Ready"
	I1201 22:11:03.198348  691475 pod_ready.go:86] duration metric: took 5.008872856s for pod "coredns-66bc5c9577-p9whp" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:03.205406  691475 pod_ready.go:83] waiting for pod "etcd-pause-188533" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:03.210423  691475 pod_ready.go:94] pod "etcd-pause-188533" is "Ready"
	I1201 22:11:03.210457  691475 pod_ready.go:86] duration metric: took 5.023522ms for pod "etcd-pause-188533" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:03.213042  691475 pod_ready.go:83] waiting for pod "kube-apiserver-pause-188533" in "kube-system" namespace to be "Ready" or be gone ...
	W1201 22:11:05.219961  691475 pod_ready.go:104] pod "kube-apiserver-pause-188533" is not "Ready", error: <nil>
	I1201 22:11:05.719120  691475 pod_ready.go:94] pod "kube-apiserver-pause-188533" is "Ready"
	I1201 22:11:05.719178  691475 pod_ready.go:86] duration metric: took 2.50606981s for pod "kube-apiserver-pause-188533" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:05.721411  691475 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-188533" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:06.227731  691475 pod_ready.go:94] pod "kube-controller-manager-pause-188533" is "Ready"
	I1201 22:11:06.227761  691475 pod_ready.go:86] duration metric: took 506.317639ms for pod "kube-controller-manager-pause-188533" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:06.230156  691475 pod_ready.go:83] waiting for pod "kube-proxy-pff7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:06.393696  691475 pod_ready.go:94] pod "kube-proxy-pff7q" is "Ready"
	I1201 22:11:06.393727  691475 pod_ready.go:86] duration metric: took 163.545352ms for pod "kube-proxy-pff7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:06.595035  691475 pod_ready.go:83] waiting for pod "kube-scheduler-pause-188533" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:06.993581  691475 pod_ready.go:94] pod "kube-scheduler-pause-188533" is "Ready"
	I1201 22:11:06.993611  691475 pod_ready.go:86] duration metric: took 398.549297ms for pod "kube-scheduler-pause-188533" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:06.993631  691475 pod_ready.go:40] duration metric: took 8.870782498s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 22:11:07.056241  691475 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1201 22:11:07.059407  691475 out.go:179] * Done! kubectl is now configured to use "pause-188533" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.252571681Z" level=info msg="Created container 23eb3c5f241ec3c013861dcfd53965dc750a523680455562b21e81f35d69d660: kube-system/kube-apiserver-pause-188533/kube-apiserver" id=cf88c3c3-e100-4f14-a6a9-7b285f39f5c3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.253824216Z" level=info msg="Starting container: 23eb3c5f241ec3c013861dcfd53965dc750a523680455562b21e81f35d69d660" id=0eff0920-c01e-485a-a103-d2384df39a97 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.257522262Z" level=info msg="Created container 34ef219e4fc6b788dede490a784e4cdb33ebd117e7eae0c1ce37fbcbdfae616b: kube-system/kube-controller-manager-pause-188533/kube-controller-manager" id=5bc60c1c-e772-4c25-b89a-75eeba034388 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.25791244Z" level=info msg="Started container" PID=2372 containerID=dd06372589408364b8b58065de97cbe55de7a24dfcbd37bfc9b061320d5e4539 description=kube-system/kube-scheduler-pause-188533/kube-scheduler id=7831e21c-a38b-491b-80fa-4fa7b19acf1b name=/runtime.v1.RuntimeService/StartContainer sandboxID=bb4c908114e901512f3c6c27a1601227dbbff1a1c6c02f329476a2d9974e1398
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.265013038Z" level=info msg="Starting container: 34ef219e4fc6b788dede490a784e4cdb33ebd117e7eae0c1ce37fbcbdfae616b" id=e3a2652f-c867-490b-9ed9-818654ae7081 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.266904656Z" level=info msg="Started container" PID=2381 containerID=23eb3c5f241ec3c013861dcfd53965dc750a523680455562b21e81f35d69d660 description=kube-system/kube-apiserver-pause-188533/kube-apiserver id=0eff0920-c01e-485a-a103-d2384df39a97 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7f709d63bcefd4fefdf81d641dd5e6425d6058a592ec35bfb5e959368ef278a8
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.275700618Z" level=info msg="Started container" PID=2385 containerID=34ef219e4fc6b788dede490a784e4cdb33ebd117e7eae0c1ce37fbcbdfae616b description=kube-system/kube-controller-manager-pause-188533/kube-controller-manager id=e3a2652f-c867-490b-9ed9-818654ae7081 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c1f63c7a59179a680d67c304e6e736cad04acec3e95d608674503dd1569f9c1
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.284600012Z" level=info msg="Created container dd1313324144863644a9ae08f65ff0230b115b579027738e7ca471d556ff8723: kube-system/etcd-pause-188533/etcd" id=ee4b3dde-8344-462b-b47b-de57f8f3a4de name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.285223006Z" level=info msg="Starting container: dd1313324144863644a9ae08f65ff0230b115b579027738e7ca471d556ff8723" id=c8e6e610-79a9-47eb-85c3-7409fb2a2fe8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.288386679Z" level=info msg="Started container" PID=2384 containerID=dd1313324144863644a9ae08f65ff0230b115b579027738e7ca471d556ff8723 description=kube-system/etcd-pause-188533/etcd id=c8e6e610-79a9-47eb-85c3-7409fb2a2fe8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0090281d9f4b29b0309ae9c162ea6666be79bdf23f971bdcb31d212deb915181
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.506886969Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.510854408Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.510897213Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.510924273Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.514664475Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.514703974Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.514729869Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.519093811Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.519246407Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.519274665Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.522665263Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.522700954Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.522733823Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.526223807Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.526269837Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	34ef219e4fc6b       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   19 seconds ago       Running             kube-controller-manager   1                   6c1f63c7a5917       kube-controller-manager-pause-188533   kube-system
	dd13133241448       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   19 seconds ago       Running             etcd                      1                   0090281d9f4b2       etcd-pause-188533                      kube-system
	23eb3c5f241ec       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   19 seconds ago       Running             kube-apiserver            1                   7f709d63bcefd       kube-apiserver-pause-188533            kube-system
	dd06372589408       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   19 seconds ago       Running             kube-scheduler            1                   bb4c908114e90       kube-scheduler-pause-188533            kube-system
	bebec9756e9fc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   19 seconds ago       Running             coredns                   1                   7db5a7024eebe       coredns-66bc5c9577-p9whp               kube-system
	10a70a6f5dbc1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   19 seconds ago       Running             kindnet-cni               1                   d017a8bf05218       kindnet-cwlgd                          kube-system
	125f9c98221f0       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   19 seconds ago       Running             kube-proxy                1                   2a86b772ebf52       kube-proxy-pff7q                       kube-system
	81d2cfcf9894f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   32 seconds ago       Exited              coredns                   0                   7db5a7024eebe       coredns-66bc5c9577-p9whp               kube-system
	f517468165e61       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   2a86b772ebf52       kube-proxy-pff7q                       kube-system
	55f3da13e226d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   d017a8bf05218       kindnet-cwlgd                          kube-system
	ee45ee63674dd       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   6c1f63c7a5917       kube-controller-manager-pause-188533   kube-system
	829d829e6c58b       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   bb4c908114e90       kube-scheduler-pause-188533            kube-system
	9d0eee6eaad1b       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   7f709d63bcefd       kube-apiserver-pause-188533            kube-system
	7f474f99bf26b       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   0090281d9f4b2       etcd-pause-188533                      kube-system
	
	
	==> coredns [81d2cfcf9894f9d0a557d8c60957ff9e303266d6d73df814a3f9fa53142a11f0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57040 - 32835 "HINFO IN 261061319210568351.4120552146690658369. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015904699s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bebec9756e9fcbf5790b7011256eaa67c0c522c378104d3e1ef3cdef4e894373] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38507 - 41417 "HINFO IN 2060688919814901410.6700011450987518916. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036692673s
	
	
	==> describe nodes <==
	Name:               pause-188533
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-188533
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=pause-188533
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T22_09_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 22:09:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-188533
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 22:11:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 22:11:02 +0000   Mon, 01 Dec 2025 22:09:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 22:11:02 +0000   Mon, 01 Dec 2025 22:09:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 22:11:02 +0000   Mon, 01 Dec 2025 22:09:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 22:11:02 +0000   Mon, 01 Dec 2025 22:10:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-188533
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                6d0ec033-c4d5-41c6-890c-9daf4739fc8c
	  Boot ID:                    06dea43b-2aa1-4726-8bb8-0a198189349a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-p9whp                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     74s
	  kube-system                 etcd-pause-188533                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         79s
	  kube-system                 kindnet-cwlgd                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      75s
	  kube-system                 kube-apiserver-pause-188533             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-pause-188533    200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-pff7q                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-scheduler-pause-188533             100m (5%)     0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 73s                kube-proxy       
	  Normal   Starting                 13s                kube-proxy       
	  Normal   NodeHasSufficientMemory  88s (x8 over 88s)  kubelet          Node pause-188533 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    88s (x8 over 88s)  kubelet          Node pause-188533 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     88s (x8 over 88s)  kubelet          Node pause-188533 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 80s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  80s                kubelet          Node pause-188533 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    80s                kubelet          Node pause-188533 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     80s                kubelet          Node pause-188533 status is now: NodeHasSufficientPID
	  Normal   Starting                 80s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           75s                node-controller  Node pause-188533 event: Registered Node pause-188533 in Controller
	  Normal   NodeReady                33s                kubelet          Node pause-188533 status is now: NodeReady
	  Normal   RegisteredNode           11s                node-controller  Node pause-188533 event: Registered Node pause-188533 in Controller
	
	
	==> dmesg <==
	[ +32.789765] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:39] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:40] overlayfs: idmapped layers are currently not supported
	[  +3.421799] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:41] overlayfs: idmapped layers are currently not supported
	[ +28.971373] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:43] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:48] overlayfs: idmapped layers are currently not supported
	[ +29.317685] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:50] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:51] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:52] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:53] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:54] overlayfs: idmapped layers are currently not supported
	[  +2.710821] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:55] overlayfs: idmapped layers are currently not supported
	[ +23.922036] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:56] overlayfs: idmapped layers are currently not supported
	[ +26.428517] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:58] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:59] overlayfs: idmapped layers are currently not supported
	[Dec 1 22:01] overlayfs: idmapped layers are currently not supported
	[Dec 1 22:02] overlayfs: idmapped layers are currently not supported
	[ +24.384212] overlayfs: idmapped layers are currently not supported
	[Dec 1 22:09] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7f474f99bf26bbeb5fd1570a1b4d07a037ededb0976f97b7cfaf0397b2fad3c9] <==
	{"level":"warn","ts":"2025-12-01T22:09:47.190467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:09:47.224941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:09:47.248586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:09:47.279674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:09:47.288830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:09:47.314253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:09:47.368479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40694","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-01T22:10:42.629037Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-01T22:10:42.629104Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-188533","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-01T22:10:42.629236Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-01T22:10:42.804631Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-01T22:10:42.804726Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-01T22:10:42.804748Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-01T22:10:42.804804Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-01T22:10:42.804866Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-01T22:10:42.804901Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-01T22:10:42.804909Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-01T22:10:42.804926Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-01T22:10:42.805043Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-01T22:10:42.805113Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-01T22:10:42.805122Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-01T22:10:42.808801Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-01T22:10:42.808963Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-01T22:10:42.809014Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-01T22:10:42.809025Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-188533","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [dd1313324144863644a9ae08f65ff0230b115b579027738e7ca471d556ff8723] <==
	{"level":"warn","ts":"2025-12-01T22:10:54.696784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:54.732996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:54.780118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:54.792815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:54.830260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:54.837708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:54.859367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:54.883441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:54.944803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:54.993003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.046231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.060892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.124680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.131380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.145249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.198244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.215647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.260303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.288028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.311774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.359354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.392470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.409291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.427722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.484914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39782","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:11:10 up  3:53,  0 user,  load average: 1.50, 1.98, 2.00
	Linux pause-188533 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [10a70a6f5dbc1cf1cdce344789e47ea80729baa80aa552017e27ca47b2227324] <==
	I1201 22:10:51.300153       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 22:10:51.300505       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1201 22:10:51.300666       1 main.go:148] setting mtu 1500 for CNI 
	I1201 22:10:51.300708       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 22:10:51.300751       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T22:10:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 22:10:51.506048       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 22:10:51.506147       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 22:10:51.506189       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 22:10:51.507215       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1201 22:10:56.706860       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 22:10:56.706895       1 metrics.go:72] Registering metrics
	I1201 22:10:56.706952       1 controller.go:711] "Syncing nftables rules"
	I1201 22:11:01.506454       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1201 22:11:01.506510       1 main.go:301] handling current node
	
	
	==> kindnet [55f3da13e226d8e308b654b59a220e4446138e3b1a7e31583e355636f80f4a1e] <==
	I1201 22:09:57.102229       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 22:09:57.102528       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1201 22:09:57.102717       1 main.go:148] setting mtu 1500 for CNI 
	I1201 22:09:57.102739       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 22:09:57.102759       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T22:09:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 22:09:57.302683       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 22:09:57.302761       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 22:09:57.302811       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 22:09:57.303193       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1201 22:10:27.303214       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1201 22:10:27.303226       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1201 22:10:27.303350       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1201 22:10:27.304624       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1201 22:10:28.603001       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 22:10:28.603031       1 metrics.go:72] Registering metrics
	I1201 22:10:28.603105       1 controller.go:711] "Syncing nftables rules"
	I1201 22:10:37.308309       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1201 22:10:37.308357       1 main.go:301] handling current node
	
	
	==> kube-apiserver [23eb3c5f241ec3c013861dcfd53965dc750a523680455562b21e81f35d69d660] <==
	I1201 22:10:56.610396       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 22:10:56.610434       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1201 22:10:56.610646       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1201 22:10:56.610813       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1201 22:10:56.610877       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1201 22:10:56.611117       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1201 22:10:56.611419       1 aggregator.go:171] initial CRD sync complete...
	I1201 22:10:56.611438       1 autoregister_controller.go:144] Starting autoregister controller
	I1201 22:10:56.611445       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1201 22:10:56.611452       1 cache.go:39] Caches are synced for autoregister controller
	I1201 22:10:56.611707       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1201 22:10:56.615766       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1201 22:10:56.615800       1 policy_source.go:240] refreshing policies
	I1201 22:10:56.628039       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1201 22:10:56.635536       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1201 22:10:56.641454       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1201 22:10:56.641582       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1201 22:10:56.647595       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1201 22:10:56.685216       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1201 22:10:57.315300       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1201 22:10:57.728122       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 22:10:59.217409       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 22:10:59.266499       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 22:10:59.414594       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 22:10:59.467175       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [9d0eee6eaad1b35e670616323ad9a68b2e29bce0607e45617707c0e8878e234e] <==
	W1201 22:10:42.649383       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.649445       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.649457       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.649508       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.649522       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.649695       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.649796       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.649890       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.650958       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.650997       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651030       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651061       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651091       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651123       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651356       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651463       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651496       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651673       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651744       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651778       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651810       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651838       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651869       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.652899       1 logging.go:55] [core] [Channel #18 SubChannel #22]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.652965       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [34ef219e4fc6b788dede490a784e4cdb33ebd117e7eae0c1ce37fbcbdfae616b] <==
	I1201 22:10:59.091848       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1201 22:10:59.091931       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-188533"
	I1201 22:10:59.091981       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1201 22:10:59.096367       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1201 22:10:59.100621       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1201 22:10:59.101774       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1201 22:10:59.104977       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1201 22:10:59.107364       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1201 22:10:59.107381       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1201 22:10:59.108595       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1201 22:10:59.108620       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1201 22:10:59.108895       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1201 22:10:59.108969       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1201 22:10:59.109489       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1201 22:10:59.111085       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1201 22:10:59.111261       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1201 22:10:59.111675       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 22:10:59.112839       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1201 22:10:59.114775       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1201 22:10:59.117814       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1201 22:10:59.117868       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 22:10:59.118011       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1201 22:10:59.118020       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1201 22:10:59.121391       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1201 22:10:59.127075       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	
	
	==> kube-controller-manager [ee45ee63674dd2a1ab4e2fc4f3a39bf0a55bbc05e2ffd6adef12eb238ba3b810] <==
	I1201 22:09:55.098761       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1201 22:09:55.098832       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1201 22:09:55.098936       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-188533"
	I1201 22:09:55.099014       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1201 22:09:55.099075       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1201 22:09:55.099535       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1201 22:09:55.101586       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1201 22:09:55.102215       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1201 22:09:55.102541       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1201 22:09:55.102579       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1201 22:09:55.102856       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1201 22:09:55.102981       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1201 22:09:55.103083       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1201 22:09:55.103209       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1201 22:09:55.103291       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1201 22:09:55.103391       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1201 22:09:55.103451       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1201 22:09:55.103513       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1201 22:09:55.103546       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1201 22:09:55.103579       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1201 22:09:55.109560       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1201 22:09:55.110991       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 22:09:55.117950       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1201 22:09:55.124284       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-188533" podCIDRs=["10.244.0.0/24"]
	I1201 22:10:40.337282       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [125f9c98221f08aefc5c1d5767531793e6623aafdbbe2788da3f53d0b37c5b5f] <==
	I1201 22:10:51.222407       1 server_linux.go:53] "Using iptables proxy"
	I1201 22:10:52.696724       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 22:10:56.743281       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 22:10:56.748363       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1201 22:10:56.748459       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 22:10:56.804625       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 22:10:56.804685       1 server_linux.go:132] "Using iptables Proxier"
	I1201 22:10:56.821901       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 22:10:56.822266       1 server.go:527] "Version info" version="v1.34.2"
	I1201 22:10:56.822323       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 22:10:56.832945       1 config.go:200] "Starting service config controller"
	I1201 22:10:56.833035       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 22:10:56.833080       1 config.go:106] "Starting endpoint slice config controller"
	I1201 22:10:56.833111       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 22:10:56.833150       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 22:10:56.833179       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 22:10:56.833916       1 config.go:309] "Starting node config controller"
	I1201 22:10:56.833989       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 22:10:56.834026       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 22:10:56.934151       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 22:10:56.934310       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 22:10:56.934311       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f517468165e6149e74b6caf291fa53acbeda3290845551238b0ba8999a831f3a] <==
	I1201 22:09:57.270982       1 server_linux.go:53] "Using iptables proxy"
	I1201 22:09:57.352710       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 22:09:57.453298       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 22:09:57.453428       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1201 22:09:57.453523       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 22:09:57.472836       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 22:09:57.472917       1 server_linux.go:132] "Using iptables Proxier"
	I1201 22:09:57.478564       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 22:09:57.478971       1 server.go:527] "Version info" version="v1.34.2"
	I1201 22:09:57.479002       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 22:09:57.484344       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 22:09:57.484437       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 22:09:57.484850       1 config.go:200] "Starting service config controller"
	I1201 22:09:57.484905       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 22:09:57.484960       1 config.go:106] "Starting endpoint slice config controller"
	I1201 22:09:57.484993       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 22:09:57.486636       1 config.go:309] "Starting node config controller"
	I1201 22:09:57.486661       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 22:09:57.486670       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 22:09:57.584661       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 22:09:57.585924       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1201 22:09:57.585994       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [829d829e6c58b92ed7c38ab13e650eb88fd6a1b9d086634b309b04cd79ffb2eb] <==
	E1201 22:09:48.129006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1201 22:09:48.129064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1201 22:09:48.129119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1201 22:09:48.129132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1201 22:09:48.129183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1201 22:09:48.129221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1201 22:09:48.129282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 22:09:48.129339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1201 22:09:48.129403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1201 22:09:48.129529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 22:09:48.129595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1201 22:09:48.956498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1201 22:09:48.957496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1201 22:09:49.015419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 22:09:49.040095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1201 22:09:49.072274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1201 22:09:49.136444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1201 22:09:49.229753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1201 22:09:49.276870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1201 22:09:51.802770       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 22:10:42.631838       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1201 22:10:42.632819       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1201 22:10:42.632838       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1201 22:10:42.632988       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1201 22:10:42.633004       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [dd06372589408364b8b58065de97cbe55de7a24dfcbd37bfc9b061320d5e4539] <==
	I1201 22:10:53.439436       1 serving.go:386] Generated self-signed cert in-memory
	W1201 22:10:56.475614       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1201 22:10:56.475720       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1201 22:10:56.475756       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1201 22:10:56.475786       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1201 22:10:56.586473       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1201 22:10:56.586577       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 22:10:56.595842       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1201 22:10:56.596074       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 22:10:56.607252       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 22:10:56.596093       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1201 22:10:56.709746       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.092781    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-188533\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="61f9f8cc8d13080efe9b6905f3567ed5" pod="kube-system/kube-apiserver-pause-188533"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.093095    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-188533\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="31c592842db5e39ccc09d331cc027c0e" pod="kube-system/etcd-pause-188533"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: I1201 22:10:51.098557    1317 scope.go:117] "RemoveContainer" containerID="9d0eee6eaad1b35e670616323ad9a68b2e29bce0607e45617707c0e8878e234e"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.099321    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-cwlgd\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0f8efb97-b10e-49da-8be5-e8bf43a2f0b7" pod="kube-system/kindnet-cwlgd"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.099674    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-p9whp\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1ae3303f-38c2-4928-9594-2bdc7c75b9e0" pod="kube-system/coredns-66bc5c9577-p9whp"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.099998    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-188533\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="61f9f8cc8d13080efe9b6905f3567ed5" pod="kube-system/kube-apiserver-pause-188533"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.100308    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-188533\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="31c592842db5e39ccc09d331cc027c0e" pod="kube-system/etcd-pause-188533"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.100603    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-188533\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4dfcd21f9a4adc8f94b19842c00db276" pod="kube-system/kube-scheduler-pause-188533"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.100903    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pff7q\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="8e72b6d8-a889-4a4e-89d7-ef091e9af0bb" pod="kube-system/kube-proxy-pff7q"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: I1201 22:10:51.125630    1317 scope.go:117] "RemoveContainer" containerID="ee45ee63674dd2a1ab4e2fc4f3a39bf0a55bbc05e2ffd6adef12eb238ba3b810"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.126431    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-188533\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="31c592842db5e39ccc09d331cc027c0e" pod="kube-system/etcd-pause-188533"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.126665    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-188533\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bffb4fb097a12663dbdef28deabbe665" pod="kube-system/kube-controller-manager-pause-188533"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.127109    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-188533\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4dfcd21f9a4adc8f94b19842c00db276" pod="kube-system/kube-scheduler-pause-188533"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.127655    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pff7q\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="8e72b6d8-a889-4a4e-89d7-ef091e9af0bb" pod="kube-system/kube-proxy-pff7q"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.128028    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-cwlgd\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0f8efb97-b10e-49da-8be5-e8bf43a2f0b7" pod="kube-system/kindnet-cwlgd"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.128363    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-p9whp\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1ae3303f-38c2-4928-9594-2bdc7c75b9e0" pod="kube-system/coredns-66bc5c9577-p9whp"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.129323    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-188533\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="61f9f8cc8d13080efe9b6905f3567ed5" pod="kube-system/kube-apiserver-pause-188533"
	Dec 01 22:10:56 pause-188533 kubelet[1317]: E1201 22:10:56.460307    1317 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-188533\" is forbidden: User \"system:node:pause-188533\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-188533' and this object" podUID="bffb4fb097a12663dbdef28deabbe665" pod="kube-system/kube-controller-manager-pause-188533"
	Dec 01 22:10:56 pause-188533 kubelet[1317]: E1201 22:10:56.460489    1317 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-188533\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-188533' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 01 22:10:56 pause-188533 kubelet[1317]: E1201 22:10:56.460614    1317 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-188533\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-188533' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 01 22:10:56 pause-188533 kubelet[1317]: E1201 22:10:56.529657    1317 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-188533\" is forbidden: User \"system:node:pause-188533\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-188533' and this object" podUID="4dfcd21f9a4adc8f94b19842c00db276" pod="kube-system/kube-scheduler-pause-188533"
	Dec 01 22:11:01 pause-188533 kubelet[1317]: W1201 22:11:01.050183    1317 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 01 22:11:07 pause-188533 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 01 22:11:07 pause-188533 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 01 22:11:07 pause-188533 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-188533 -n pause-188533
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-188533 -n pause-188533: exit status 2 (524.694422ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-188533 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-188533
helpers_test.go:243: (dbg) docker inspect pause-188533:

-- stdout --
	[
	    {
	        "Id": "56bed8e2da48713a38d2da87977f667743a96c567f41b8251563801866033a15",
	        "Created": "2025-12-01T22:09:22.266784657Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 688889,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T22:09:22.33326856Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/56bed8e2da48713a38d2da87977f667743a96c567f41b8251563801866033a15/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/56bed8e2da48713a38d2da87977f667743a96c567f41b8251563801866033a15/hostname",
	        "HostsPath": "/var/lib/docker/containers/56bed8e2da48713a38d2da87977f667743a96c567f41b8251563801866033a15/hosts",
	        "LogPath": "/var/lib/docker/containers/56bed8e2da48713a38d2da87977f667743a96c567f41b8251563801866033a15/56bed8e2da48713a38d2da87977f667743a96c567f41b8251563801866033a15-json.log",
	        "Name": "/pause-188533",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-188533:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-188533",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "56bed8e2da48713a38d2da87977f667743a96c567f41b8251563801866033a15",
	                "LowerDir": "/var/lib/docker/overlay2/da6d7a6f99a949509f6893d10c827324af7dde87ebdf51a6a5e3471d94cd312c-init/diff:/var/lib/docker/overlay2/f0ba49b44048d740697b37803f992c2f7a99e21ce77995ff128ceffc01329aa1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da6d7a6f99a949509f6893d10c827324af7dde87ebdf51a6a5e3471d94cd312c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da6d7a6f99a949509f6893d10c827324af7dde87ebdf51a6a5e3471d94cd312c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da6d7a6f99a949509f6893d10c827324af7dde87ebdf51a6a5e3471d94cd312c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-188533",
	                "Source": "/var/lib/docker/volumes/pause-188533/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-188533",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-188533",
	                "name.minikube.sigs.k8s.io": "pause-188533",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aa85ef5b998c2ec06e1d027e56c8c8421693aca39f30c12d9128156dcc286711",
	            "SandboxKey": "/var/run/docker/netns/aa85ef5b998c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-188533": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:2a:b2:9d:e4:0d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8eb773d1b4921a3f23af0eb8ba4d602ded05f859cfc5259a39cee841753a40e2",
	                    "EndpointID": "834500a0d9f0f5caa17339584e46f145c9dbb5fc0c42a9b232c013658ff04eb0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-188533",
	                        "56bed8e2da48"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
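The failure mode here is that the container's `State` still reports `"Status": "running"` with `"Paused": false` after the `pause` command. A minimal sketch of checking that programmatically from `docker inspect` output (the JSON below is an abbreviated, hypothetical excerpt of the dump above; field names match the Docker Engine inspect schema):

```python
import json

# Abbreviated sample of `docker inspect <container>` output: a JSON
# array with one object per inspected container.
inspect_output = """
[
  {
    "Name": "/pause-188533",
    "State": {"Status": "running", "Running": true, "Paused": false}
  }
]
"""

def pause_state(raw: str) -> tuple[str, bool]:
    """Return (Status, Paused) for the first container in the inspect dump."""
    state = json.loads(raw)[0]["State"]
    return state["Status"], state["Paused"]

status, paused = pause_state(inspect_output)
# A successful `docker pause` would flip these to ("paused", True);
# the post-mortem above shows the container still ("running", False).
print(status, paused)
```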
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-188533 -n pause-188533
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-188533 -n pause-188533: exit status 2 (460.873801ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-188533 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-188533 logs -n 25: (1.454775569s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-516822 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-516822       │ jenkins │ v1.37.0 │ 01 Dec 25 22:00 UTC │ 01 Dec 25 22:01 UTC │
	│ start   │ -p missing-upgrade-152595 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-152595    │ jenkins │ v1.35.0 │ 01 Dec 25 22:00 UTC │ 01 Dec 25 22:02 UTC │
	│ start   │ -p NoKubernetes-516822 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-516822       │ jenkins │ v1.37.0 │ 01 Dec 25 22:01 UTC │ 01 Dec 25 22:01 UTC │
	│ delete  │ -p NoKubernetes-516822                                                                                                                          │ NoKubernetes-516822       │ jenkins │ v1.37.0 │ 01 Dec 25 22:01 UTC │ 01 Dec 25 22:01 UTC │
	│ start   │ -p NoKubernetes-516822 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-516822       │ jenkins │ v1.37.0 │ 01 Dec 25 22:01 UTC │ 01 Dec 25 22:01 UTC │
	│ ssh     │ -p NoKubernetes-516822 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-516822       │ jenkins │ v1.37.0 │ 01 Dec 25 22:01 UTC │                     │
	│ stop    │ -p NoKubernetes-516822                                                                                                                          │ NoKubernetes-516822       │ jenkins │ v1.37.0 │ 01 Dec 25 22:01 UTC │ 01 Dec 25 22:01 UTC │
	│ start   │ -p NoKubernetes-516822 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-516822       │ jenkins │ v1.37.0 │ 01 Dec 25 22:01 UTC │ 01 Dec 25 22:01 UTC │
	│ ssh     │ -p NoKubernetes-516822 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-516822       │ jenkins │ v1.37.0 │ 01 Dec 25 22:01 UTC │                     │
	│ delete  │ -p NoKubernetes-516822                                                                                                                          │ NoKubernetes-516822       │ jenkins │ v1.37.0 │ 01 Dec 25 22:01 UTC │ 01 Dec 25 22:01 UTC │
	│ start   │ -p kubernetes-upgrade-738753 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-738753 │ jenkins │ v1.37.0 │ 01 Dec 25 22:01 UTC │ 01 Dec 25 22:02 UTC │
	│ start   │ -p missing-upgrade-152595 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-152595    │ jenkins │ v1.37.0 │ 01 Dec 25 22:02 UTC │ 01 Dec 25 22:03 UTC │
	│ stop    │ -p kubernetes-upgrade-738753                                                                                                                    │ kubernetes-upgrade-738753 │ jenkins │ v1.37.0 │ 01 Dec 25 22:02 UTC │ 01 Dec 25 22:02 UTC │
	│ start   │ -p kubernetes-upgrade-738753 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-738753 │ jenkins │ v1.37.0 │ 01 Dec 25 22:02 UTC │                     │
	│ delete  │ -p missing-upgrade-152595                                                                                                                       │ missing-upgrade-152595    │ jenkins │ v1.37.0 │ 01 Dec 25 22:03 UTC │ 01 Dec 25 22:03 UTC │
	│ start   │ -p stopped-upgrade-952426 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-952426    │ jenkins │ v1.35.0 │ 01 Dec 25 22:03 UTC │ 01 Dec 25 22:03 UTC │
	│ stop    │ stopped-upgrade-952426 stop                                                                                                                     │ stopped-upgrade-952426    │ jenkins │ v1.35.0 │ 01 Dec 25 22:03 UTC │ 01 Dec 25 22:03 UTC │
	│ start   │ -p stopped-upgrade-952426 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-952426    │ jenkins │ v1.37.0 │ 01 Dec 25 22:03 UTC │ 01 Dec 25 22:08 UTC │
	│ delete  │ -p stopped-upgrade-952426                                                                                                                       │ stopped-upgrade-952426    │ jenkins │ v1.37.0 │ 01 Dec 25 22:08 UTC │ 01 Dec 25 22:08 UTC │
	│ start   │ -p running-upgrade-976949 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-976949    │ jenkins │ v1.35.0 │ 01 Dec 25 22:08 UTC │ 01 Dec 25 22:08 UTC │
	│ start   │ -p running-upgrade-976949 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-976949    │ jenkins │ v1.37.0 │ 01 Dec 25 22:08 UTC │ 01 Dec 25 22:09 UTC │
	│ delete  │ -p running-upgrade-976949                                                                                                                       │ running-upgrade-976949    │ jenkins │ v1.37.0 │ 01 Dec 25 22:09 UTC │ 01 Dec 25 22:09 UTC │
	│ start   │ -p pause-188533 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-188533              │ jenkins │ v1.37.0 │ 01 Dec 25 22:09 UTC │ 01 Dec 25 22:10 UTC │
	│ start   │ -p pause-188533 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-188533              │ jenkins │ v1.37.0 │ 01 Dec 25 22:10 UTC │ 01 Dec 25 22:11 UTC │
	│ pause   │ -p pause-188533 --alsologtostderr -v=5                                                                                                          │ pause-188533              │ jenkins │ v1.37.0 │ 01 Dec 25 22:11 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 22:10:40
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 22:10:40.783647  691475 out.go:360] Setting OutFile to fd 1 ...
	I1201 22:10:40.783853  691475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 22:10:40.783887  691475 out.go:374] Setting ErrFile to fd 2...
	I1201 22:10:40.783914  691475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 22:10:40.784312  691475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 22:10:40.784801  691475 out.go:368] Setting JSON to false
	I1201 22:10:40.786146  691475 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":13990,"bootTime":1764613051,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 22:10:40.786233  691475 start.go:143] virtualization:  
	I1201 22:10:40.789151  691475 out.go:179] * [pause-188533] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 22:10:40.792991  691475 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 22:10:40.793157  691475 notify.go:221] Checking for updates...
	I1201 22:10:40.798720  691475 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 22:10:40.801724  691475 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 22:10:40.804566  691475 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 22:10:40.807368  691475 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 22:10:40.810244  691475 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 22:10:40.813718  691475 config.go:182] Loaded profile config "pause-188533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 22:10:40.814324  691475 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 22:10:40.853471  691475 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 22:10:40.853589  691475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 22:10:40.931193  691475 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-01 22:10:40.919786129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 22:10:40.931307  691475 docker.go:319] overlay module found
	I1201 22:10:40.934602  691475 out.go:179] * Using the docker driver based on existing profile
	I1201 22:10:40.937515  691475 start.go:309] selected driver: docker
	I1201 22:10:40.937541  691475 start.go:927] validating driver "docker" against &{Name:pause-188533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-188533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 22:10:40.937741  691475 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 22:10:40.937848  691475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 22:10:41.017388  691475 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-01 22:10:41.006274004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 22:10:41.017833  691475 cni.go:84] Creating CNI manager for ""
	I1201 22:10:41.017902  691475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 22:10:41.017953  691475 start.go:353] cluster config:
	{Name:pause-188533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-188533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 22:10:41.021213  691475 out.go:179] * Starting "pause-188533" primary control-plane node in "pause-188533" cluster
	I1201 22:10:41.024099  691475 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 22:10:41.027670  691475 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 22:10:41.031121  691475 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 22:10:41.031441  691475 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1201 22:10:41.031488  691475 cache.go:65] Caching tarball of preloaded images
	I1201 22:10:41.031331  691475 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 22:10:41.031864  691475 preload.go:238] Found /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1201 22:10:41.031900  691475 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 22:10:41.032094  691475 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/config.json ...
	I1201 22:10:41.054684  691475 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 22:10:41.054714  691475 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1201 22:10:41.054728  691475 cache.go:243] Successfully downloaded all kic artifacts
	I1201 22:10:41.054762  691475 start.go:360] acquireMachinesLock for pause-188533: {Name:mkffc0f334f79b444589375776f7dd5028fc5c89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 22:10:41.054826  691475 start.go:364] duration metric: took 38.498µs to acquireMachinesLock for "pause-188533"
	I1201 22:10:41.054848  691475 start.go:96] Skipping create...Using existing machine configuration
	I1201 22:10:41.054862  691475 fix.go:54] fixHost starting: 
	I1201 22:10:41.055125  691475 cli_runner.go:164] Run: docker container inspect pause-188533 --format={{.State.Status}}
	I1201 22:10:41.076041  691475 fix.go:112] recreateIfNeeded on pause-188533: state=Running err=<nil>
	W1201 22:10:41.076075  691475 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 22:10:41.079360  691475 out.go:252] * Updating the running docker "pause-188533" container ...
	I1201 22:10:41.079399  691475 machine.go:94] provisionDockerMachine start ...
	I1201 22:10:41.079502  691475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188533
	I1201 22:10:41.097969  691475 main.go:143] libmachine: Using SSH client type: native
	I1201 22:10:41.098341  691475 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1201 22:10:41.098356  691475 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 22:10:41.255942  691475 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-188533
	
	I1201 22:10:41.256065  691475 ubuntu.go:182] provisioning hostname "pause-188533"
	I1201 22:10:41.256170  691475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188533
	I1201 22:10:41.274900  691475 main.go:143] libmachine: Using SSH client type: native
	I1201 22:10:41.275305  691475 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1201 22:10:41.275322  691475 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-188533 && echo "pause-188533" | sudo tee /etc/hostname
	I1201 22:10:41.438814  691475 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-188533
	
	I1201 22:10:41.438908  691475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188533
	I1201 22:10:41.465705  691475 main.go:143] libmachine: Using SSH client type: native
	I1201 22:10:41.466039  691475 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1201 22:10:41.466075  691475 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-188533' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-188533/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-188533' | sudo tee -a /etc/hosts; 
				fi
			fi
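The provisioning command above is an idempotent `/etc/hosts` update: if no entry for the node name exists, rewrite any existing `127.0.1.1` line, otherwise append one. The same logic can be exercised safely against a scratch copy (a sketch; the `/tmp` path and sample file contents are illustrative, not from the test run):

```shell
# Work on a scratch copy instead of the real /etc/hosts.
HOSTS=/tmp/hosts.test
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

NAME=pause-188533
if ! grep -q "\s$NAME$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1\s' "$HOSTS"; then
    # rewrite the existing 127.0.1.1 entry in place
    sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # no 127.0.1.1 line yet: append one
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Running it a second time is a no-op, which is why minikube can re-provision an already-running machine without duplicating hosts entries.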
	I1201 22:10:41.616151  691475 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 22:10:41.616178  691475 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-482752/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-482752/.minikube}
	I1201 22:10:41.616223  691475 ubuntu.go:190] setting up certificates
	I1201 22:10:41.616234  691475 provision.go:84] configureAuth start
	I1201 22:10:41.616310  691475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-188533
	I1201 22:10:41.636985  691475 provision.go:143] copyHostCerts
	I1201 22:10:41.637066  691475 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem, removing ...
	I1201 22:10:41.637085  691475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem
	I1201 22:10:41.637168  691475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/cert.pem (1123 bytes)
	I1201 22:10:41.637283  691475 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem, removing ...
	I1201 22:10:41.637289  691475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem
	I1201 22:10:41.637317  691475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/key.pem (1675 bytes)
	I1201 22:10:41.637381  691475 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem, removing ...
	I1201 22:10:41.637385  691475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem
	I1201 22:10:41.637410  691475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-482752/.minikube/ca.pem (1082 bytes)
	I1201 22:10:41.637469  691475 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem org=jenkins.pause-188533 san=[127.0.0.1 192.168.85.2 localhost minikube pause-188533]
	I1201 22:10:42.228698  691475 provision.go:177] copyRemoteCerts
	I1201 22:10:42.228792  691475 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 22:10:42.228840  691475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188533
	I1201 22:10:42.264597  691475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/pause-188533/id_rsa Username:docker}
	I1201 22:10:42.376442  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1201 22:10:42.396674  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 22:10:42.415478  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1201 22:10:42.434458  691475 provision.go:87] duration metric: took 818.199534ms to configureAuth
	I1201 22:10:42.434532  691475 ubuntu.go:206] setting minikube options for container-runtime
	I1201 22:10:42.434765  691475 config.go:182] Loaded profile config "pause-188533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 22:10:42.434876  691475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188533
	I1201 22:10:42.454395  691475 main.go:143] libmachine: Using SSH client type: native
	I1201 22:10:42.454705  691475 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1201 22:10:42.454719  691475 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 22:10:47.861712  691475 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 22:10:47.861736  691475 machine.go:97] duration metric: took 6.782327254s to provisionDockerMachine
	I1201 22:10:47.861749  691475 start.go:293] postStartSetup for "pause-188533" (driver="docker")
	I1201 22:10:47.861761  691475 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 22:10:47.861828  691475 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 22:10:47.861899  691475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188533
	I1201 22:10:47.881097  691475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/pause-188533/id_rsa Username:docker}
	I1201 22:10:47.991841  691475 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 22:10:47.995677  691475 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 22:10:47.995709  691475 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 22:10:47.995721  691475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/addons for local assets ...
	I1201 22:10:47.995778  691475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-482752/.minikube/files for local assets ...
	I1201 22:10:47.995865  691475 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem -> 4860022.pem in /etc/ssl/certs
	I1201 22:10:47.995987  691475 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 22:10:48.004239  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 22:10:48.033891  691475 start.go:296] duration metric: took 172.124566ms for postStartSetup
	I1201 22:10:48.033990  691475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 22:10:48.034056  691475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188533
	I1201 22:10:48.053241  691475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/pause-188533/id_rsa Username:docker}
	I1201 22:10:48.156897  691475 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 22:10:48.162184  691475 fix.go:56] duration metric: took 7.107314893s for fixHost
	I1201 22:10:48.162211  691475 start.go:83] releasing machines lock for "pause-188533", held for 7.107374888s
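Timing lines like the two above ("duration metric: took ...") can be pulled out of a saved log with a short awk filter, which is handy when comparing provisioning phases across runs (a sketch; the `/tmp` path is illustrative and the sample lines are copied from this log):

```shell
# Two representative lines saved from the log above.
cat > /tmp/minikube.log <<'EOF'
I1201 22:10:48.162184  691475 fix.go:56] duration metric: took 7.107314893s for fixHost
I1201 22:10:48.162211  691475 start.go:83] releasing machines lock for "pause-188533", held for 7.107374888s
EOF

# Split each line on the "duration metric: took " marker and print
# the first word of the remainder (the Go-formatted duration).
awk -F'duration metric: took ' 'NF>1 {split($2,a," "); print a[1]}' /tmp/minikube.log
```

Only lines carrying the marker are emitted; here that is the single duration `7.107314893s`.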
	I1201 22:10:48.162285  691475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-188533
	I1201 22:10:48.180180  691475 ssh_runner.go:195] Run: cat /version.json
	I1201 22:10:48.180237  691475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188533
	I1201 22:10:48.180491  691475 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 22:10:48.180562  691475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188533
	I1201 22:10:48.201352  691475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/pause-188533/id_rsa Username:docker}
	I1201 22:10:48.204692  691475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/pause-188533/id_rsa Username:docker}
	I1201 22:10:48.303590  691475 ssh_runner.go:195] Run: systemctl --version
	I1201 22:10:48.398682  691475 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 22:10:48.441635  691475 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 22:10:48.446212  691475 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 22:10:48.446328  691475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 22:10:48.454676  691475 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 22:10:48.454699  691475 start.go:496] detecting cgroup driver to use...
	I1201 22:10:48.454732  691475 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1201 22:10:48.454783  691475 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 22:10:48.471027  691475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 22:10:48.484876  691475 docker.go:218] disabling cri-docker service (if available) ...
	I1201 22:10:48.484949  691475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 22:10:48.502158  691475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 22:10:48.516490  691475 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 22:10:48.649818  691475 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 22:10:48.816954  691475 docker.go:234] disabling docker service ...
	I1201 22:10:48.817041  691475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 22:10:48.833391  691475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 22:10:48.847742  691475 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 22:10:48.991410  691475 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 22:10:49.142059  691475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 22:10:49.155999  691475 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 22:10:49.171802  691475 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 22:10:49.171868  691475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:10:49.181736  691475 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 22:10:49.181905  691475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:10:49.191849  691475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:10:49.201624  691475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:10:49.210975  691475 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 22:10:49.219570  691475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:10:49.229546  691475 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 22:10:49.238531  691475 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
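The sed invocations above rewrite `/etc/crio/crio.conf.d/02-crio.conf` in place: pin the pause image, switch the cgroup manager, normalize `conmon_cgroup`, and inject a `default_sysctls` block. Their combined effect can be verified on a scratch copy (a sketch; the sample starting config is illustrative, only the sed commands mirror the log):

```shell
CONF=/tmp/02-crio.conf
cat > "$CONF" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Same sequence of edits as the log: pause image, cgroup driver,
# conmon cgroup, then an unprivileged-port sysctl.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
sed -i '/conmon_cgroup = .*/d' "$CONF"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
grep -q '^ *default_sysctls' "$CONF" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
cat "$CONF"
```

Because every edit is a match-and-replace (or guarded by a `grep -q`), re-running the whole block leaves the file unchanged, which matches how minikube reconfigures an existing node.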
	I1201 22:10:49.248271  691475 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 22:10:49.256303  691475 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 22:10:49.264254  691475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 22:10:49.403742  691475 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 22:10:49.622057  691475 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 22:10:49.622126  691475 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 22:10:49.626165  691475 start.go:564] Will wait 60s for crictl version
	I1201 22:10:49.626237  691475 ssh_runner.go:195] Run: which crictl
	I1201 22:10:49.630035  691475 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 22:10:49.655986  691475 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 22:10:49.656091  691475 ssh_runner.go:195] Run: crio --version
	I1201 22:10:49.686776  691475 ssh_runner.go:195] Run: crio --version
	I1201 22:10:49.721385  691475 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1201 22:10:49.724366  691475 cli_runner.go:164] Run: docker network inspect pause-188533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 22:10:49.741205  691475 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1201 22:10:49.745652  691475 kubeadm.go:884] updating cluster {Name:pause-188533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-188533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 22:10:49.745820  691475 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 22:10:49.745897  691475 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 22:10:49.783601  691475 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 22:10:49.783628  691475 crio.go:433] Images already preloaded, skipping extraction
	I1201 22:10:49.783696  691475 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 22:10:49.810926  691475 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 22:10:49.810954  691475 cache_images.go:86] Images are preloaded, skipping loading
	I1201 22:10:49.810963  691475 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1201 22:10:49.811082  691475 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-188533 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-188533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 22:10:49.811225  691475 ssh_runner.go:195] Run: crio config
	I1201 22:10:49.887737  691475 cni.go:84] Creating CNI manager for ""
	I1201 22:10:49.887765  691475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 22:10:49.887820  691475 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 22:10:49.887853  691475 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-188533 NodeName:pause-188533 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 22:10:49.888008  691475 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-188533"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 22:10:49.888099  691475 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 22:10:49.896829  691475 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 22:10:49.896931  691475 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 22:10:49.905075  691475 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1201 22:10:49.918332  691475 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 22:10:49.931806  691475 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1201 22:10:49.944962  691475 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1201 22:10:49.948737  691475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 22:10:50.089576  691475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 22:10:50.104603  691475 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533 for IP: 192.168.85.2
	I1201 22:10:50.104628  691475 certs.go:195] generating shared ca certs ...
	I1201 22:10:50.104645  691475 certs.go:227] acquiring lock for ca certs: {Name:mk0475ccdbd6f854bab22fd8dfb32cc1af021336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 22:10:50.104875  691475 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key
	I1201 22:10:50.104952  691475 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key
	I1201 22:10:50.104966  691475 certs.go:257] generating profile certs ...
	I1201 22:10:50.105075  691475 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/client.key
	I1201 22:10:50.105157  691475 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/apiserver.key.1495b8ca
	I1201 22:10:50.105209  691475 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/proxy-client.key
	I1201 22:10:50.105327  691475 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem (1338 bytes)
	W1201 22:10:50.105373  691475 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002_empty.pem, impossibly tiny 0 bytes
	I1201 22:10:50.105387  691475 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 22:10:50.105420  691475 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/ca.pem (1082 bytes)
	I1201 22:10:50.105453  691475 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/cert.pem (1123 bytes)
	I1201 22:10:50.105484  691475 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/certs/key.pem (1675 bytes)
	I1201 22:10:50.105536  691475 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem (1708 bytes)
	I1201 22:10:50.106142  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 22:10:50.127296  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 22:10:50.147539  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 22:10:50.180070  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1201 22:10:50.206374  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1201 22:10:50.231357  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 22:10:50.250462  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 22:10:50.269841  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 22:10:50.288573  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/certs/486002.pem --> /usr/share/ca-certificates/486002.pem (1338 bytes)
	I1201 22:10:50.306806  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/ssl/certs/4860022.pem --> /usr/share/ca-certificates/4860022.pem (1708 bytes)
	I1201 22:10:50.326456  691475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 22:10:50.345367  691475 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 22:10:50.359079  691475 ssh_runner.go:195] Run: openssl version
	I1201 22:10:50.365709  691475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/486002.pem && ln -fs /usr/share/ca-certificates/486002.pem /etc/ssl/certs/486002.pem"
	I1201 22:10:50.380228  691475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/486002.pem
	I1201 22:10:50.384198  691475 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 20:58 /usr/share/ca-certificates/486002.pem
	I1201 22:10:50.384303  691475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/486002.pem
	I1201 22:10:50.427347  691475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/486002.pem /etc/ssl/certs/51391683.0"
	I1201 22:10:50.435554  691475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4860022.pem && ln -fs /usr/share/ca-certificates/4860022.pem /etc/ssl/certs/4860022.pem"
	I1201 22:10:50.444256  691475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4860022.pem
	I1201 22:10:50.448228  691475 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 20:58 /usr/share/ca-certificates/4860022.pem
	I1201 22:10:50.448298  691475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4860022.pem
	I1201 22:10:50.490761  691475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4860022.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 22:10:50.499544  691475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 22:10:50.508732  691475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 22:10:50.513166  691475 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 20:38 /usr/share/ca-certificates/minikubeCA.pem
	I1201 22:10:50.513250  691475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 22:10:50.556332  691475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 22:10:50.565301  691475 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 22:10:50.569307  691475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 22:10:50.611070  691475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 22:10:50.653588  691475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 22:10:50.695453  691475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 22:10:50.736984  691475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 22:10:50.778455  691475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 22:10:50.822944  691475 kubeadm.go:401] StartCluster: {Name:pause-188533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-188533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 22:10:50.823074  691475 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 22:10:50.823206  691475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 22:10:50.852284  691475 cri.go:89] found id: "81d2cfcf9894f9d0a557d8c60957ff9e303266d6d73df814a3f9fa53142a11f0"
	I1201 22:10:50.852350  691475 cri.go:89] found id: "f517468165e6149e74b6caf291fa53acbeda3290845551238b0ba8999a831f3a"
	I1201 22:10:50.852362  691475 cri.go:89] found id: "55f3da13e226d8e308b654b59a220e4446138e3b1a7e31583e355636f80f4a1e"
	I1201 22:10:50.852367  691475 cri.go:89] found id: "ee45ee63674dd2a1ab4e2fc4f3a39bf0a55bbc05e2ffd6adef12eb238ba3b810"
	I1201 22:10:50.852371  691475 cri.go:89] found id: "829d829e6c58b92ed7c38ab13e650eb88fd6a1b9d086634b309b04cd79ffb2eb"
	I1201 22:10:50.852374  691475 cri.go:89] found id: "9d0eee6eaad1b35e670616323ad9a68b2e29bce0607e45617707c0e8878e234e"
	I1201 22:10:50.852378  691475 cri.go:89] found id: "7f474f99bf26bbeb5fd1570a1b4d07a037ededb0976f97b7cfaf0397b2fad3c9"
	I1201 22:10:50.852389  691475 cri.go:89] found id: ""
	I1201 22:10:50.852464  691475 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 22:10:50.867970  691475 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T22:10:50Z" level=error msg="open /run/runc: no such file or directory"
	I1201 22:10:50.868134  691475 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 22:10:50.880296  691475 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 22:10:50.880319  691475 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 22:10:50.880378  691475 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 22:10:50.888978  691475 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 22:10:50.889677  691475 kubeconfig.go:125] found "pause-188533" server: "https://192.168.85.2:8443"
	I1201 22:10:50.890478  691475 kapi.go:59] client config for pause-188533: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/client.key", CAFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 22:10:50.890985  691475 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1201 22:10:50.891006  691475 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1201 22:10:50.891012  691475 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1201 22:10:50.891016  691475 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1201 22:10:50.891020  691475 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1201 22:10:50.893607  691475 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 22:10:50.914765  691475 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1201 22:10:50.914800  691475 kubeadm.go:602] duration metric: took 34.474957ms to restartPrimaryControlPlane
	I1201 22:10:50.914812  691475 kubeadm.go:403] duration metric: took 91.879273ms to StartCluster
	I1201 22:10:50.914828  691475 settings.go:142] acquiring lock: {Name:mk783c1fd28fb527bb837882511f132133dc86fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 22:10:50.914896  691475 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 22:10:50.915791  691475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-482752/kubeconfig: {Name:mk92cfd0553ba70a7f11610c1bc1b8b04b905ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 22:10:50.916011  691475 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 22:10:50.916370  691475 config.go:182] Loaded profile config "pause-188533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 22:10:50.916416  691475 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 22:10:50.919473  691475 out.go:179] * Verifying Kubernetes components...
	I1201 22:10:50.919559  691475 out.go:179] * Enabled addons: 
	I1201 22:10:50.922419  691475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 22:10:50.922541  691475 addons.go:530] duration metric: took 6.126817ms for enable addons: enabled=[]
	I1201 22:10:51.113777  691475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 22:10:51.139063  691475 node_ready.go:35] waiting up to 6m0s for node "pause-188533" to be "Ready" ...
	I1201 22:10:56.548598  691475 node_ready.go:49] node "pause-188533" is "Ready"
	I1201 22:10:56.548633  691475 node_ready.go:38] duration metric: took 5.40953418s for node "pause-188533" to be "Ready" ...
	I1201 22:10:56.548647  691475 api_server.go:52] waiting for apiserver process to appear ...
	I1201 22:10:56.548711  691475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 22:10:56.573773  691475 api_server.go:72] duration metric: took 5.657728382s to wait for apiserver process to appear ...
	I1201 22:10:56.573802  691475 api_server.go:88] waiting for apiserver healthz status ...
	I1201 22:10:56.573823  691475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1201 22:10:56.640013  691475 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 22:10:56.640059  691475 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 22:10:57.074688  691475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1201 22:10:57.083469  691475 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 22:10:57.083508  691475 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 22:10:57.574626  691475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1201 22:10:57.582954  691475 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 22:10:57.582987  691475 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 22:10:58.074203  691475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1201 22:10:58.082778  691475 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1201 22:10:58.084268  691475 api_server.go:141] control plane version: v1.34.2
	I1201 22:10:58.084302  691475 api_server.go:131] duration metric: took 1.51048963s to wait for apiserver health ...
	I1201 22:10:58.084312  691475 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 22:10:58.089673  691475 system_pods.go:59] 7 kube-system pods found
	I1201 22:10:58.089728  691475 system_pods.go:61] "coredns-66bc5c9577-p9whp" [1ae3303f-38c2-4928-9594-2bdc7c75b9e0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 22:10:58.089741  691475 system_pods.go:61] "etcd-pause-188533" [2748e5b3-1034-4540-b438-0f01701e4149] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 22:10:58.089783  691475 system_pods.go:61] "kindnet-cwlgd" [0f8efb97-b10e-49da-8be5-e8bf43a2f0b7] Running
	I1201 22:10:58.089791  691475 system_pods.go:61] "kube-apiserver-pause-188533" [317b03c2-af8b-4e84-a383-e784d7fa4e4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 22:10:58.089799  691475 system_pods.go:61] "kube-controller-manager-pause-188533" [ecacf85a-5970-424b-a9ed-792901dcc398] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 22:10:58.089804  691475 system_pods.go:61] "kube-proxy-pff7q" [8e72b6d8-a889-4a4e-89d7-ef091e9af0bb] Running
	I1201 22:10:58.089816  691475 system_pods.go:61] "kube-scheduler-pause-188533" [67c42eac-0489-48f2-a7a4-5da930af5192] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 22:10:58.089823  691475 system_pods.go:74] duration metric: took 5.504784ms to wait for pod list to return data ...
	I1201 22:10:58.089857  691475 default_sa.go:34] waiting for default service account to be created ...
	I1201 22:10:58.092721  691475 default_sa.go:45] found service account: "default"
	I1201 22:10:58.092750  691475 default_sa.go:55] duration metric: took 2.878904ms for default service account to be created ...
	I1201 22:10:58.092761  691475 system_pods.go:116] waiting for k8s-apps to be running ...
	I1201 22:10:58.097523  691475 system_pods.go:86] 7 kube-system pods found
	I1201 22:10:58.097564  691475 system_pods.go:89] "coredns-66bc5c9577-p9whp" [1ae3303f-38c2-4928-9594-2bdc7c75b9e0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 22:10:58.097574  691475 system_pods.go:89] "etcd-pause-188533" [2748e5b3-1034-4540-b438-0f01701e4149] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 22:10:58.097590  691475 system_pods.go:89] "kindnet-cwlgd" [0f8efb97-b10e-49da-8be5-e8bf43a2f0b7] Running
	I1201 22:10:58.097596  691475 system_pods.go:89] "kube-apiserver-pause-188533" [317b03c2-af8b-4e84-a383-e784d7fa4e4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 22:10:58.097608  691475 system_pods.go:89] "kube-controller-manager-pause-188533" [ecacf85a-5970-424b-a9ed-792901dcc398] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 22:10:58.097615  691475 system_pods.go:89] "kube-proxy-pff7q" [8e72b6d8-a889-4a4e-89d7-ef091e9af0bb] Running
	I1201 22:10:58.097622  691475 system_pods.go:89] "kube-scheduler-pause-188533" [67c42eac-0489-48f2-a7a4-5da930af5192] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 22:10:58.097638  691475 system_pods.go:126] duration metric: took 4.869827ms to wait for k8s-apps to be running ...
	I1201 22:10:58.097654  691475 system_svc.go:44] waiting for kubelet service to be running ....
	I1201 22:10:58.097714  691475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 22:10:58.114007  691475 system_svc.go:56] duration metric: took 16.343532ms WaitForService to wait for kubelet
	I1201 22:10:58.114039  691475 kubeadm.go:587] duration metric: took 7.19799703s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 22:10:58.114057  691475 node_conditions.go:102] verifying NodePressure condition ...
	I1201 22:10:58.118420  691475 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1201 22:10:58.118459  691475 node_conditions.go:123] node cpu capacity is 2
	I1201 22:10:58.118477  691475 node_conditions.go:105] duration metric: took 4.410777ms to run NodePressure ...
	I1201 22:10:58.118489  691475 start.go:242] waiting for startup goroutines ...
	I1201 22:10:58.118497  691475 start.go:247] waiting for cluster config update ...
	I1201 22:10:58.118509  691475 start.go:256] writing updated cluster config ...
	I1201 22:10:58.118836  691475 ssh_runner.go:195] Run: rm -f paused
	I1201 22:10:58.122808  691475 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 22:10:58.123667  691475 kapi.go:59] client config for pause-188533: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/profiles/pause-188533/client.key", CAFile:"/home/jenkins/minikube-integration/21997-482752/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 22:10:58.189396  691475 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p9whp" in "kube-system" namespace to be "Ready" or be gone ...
	W1201 22:11:00.225544  691475 pod_ready.go:104] pod "coredns-66bc5c9577-p9whp" is not "Ready", error: <nil>
	W1201 22:11:02.695502  691475 pod_ready.go:104] pod "coredns-66bc5c9577-p9whp" is not "Ready", error: <nil>
	I1201 22:11:03.198319  691475 pod_ready.go:94] pod "coredns-66bc5c9577-p9whp" is "Ready"
	I1201 22:11:03.198348  691475 pod_ready.go:86] duration metric: took 5.008872856s for pod "coredns-66bc5c9577-p9whp" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:03.205406  691475 pod_ready.go:83] waiting for pod "etcd-pause-188533" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:03.210423  691475 pod_ready.go:94] pod "etcd-pause-188533" is "Ready"
	I1201 22:11:03.210457  691475 pod_ready.go:86] duration metric: took 5.023522ms for pod "etcd-pause-188533" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:03.213042  691475 pod_ready.go:83] waiting for pod "kube-apiserver-pause-188533" in "kube-system" namespace to be "Ready" or be gone ...
	W1201 22:11:05.219961  691475 pod_ready.go:104] pod "kube-apiserver-pause-188533" is not "Ready", error: <nil>
	I1201 22:11:05.719120  691475 pod_ready.go:94] pod "kube-apiserver-pause-188533" is "Ready"
	I1201 22:11:05.719178  691475 pod_ready.go:86] duration metric: took 2.50606981s for pod "kube-apiserver-pause-188533" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:05.721411  691475 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-188533" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:06.227731  691475 pod_ready.go:94] pod "kube-controller-manager-pause-188533" is "Ready"
	I1201 22:11:06.227761  691475 pod_ready.go:86] duration metric: took 506.317639ms for pod "kube-controller-manager-pause-188533" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:06.230156  691475 pod_ready.go:83] waiting for pod "kube-proxy-pff7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:06.393696  691475 pod_ready.go:94] pod "kube-proxy-pff7q" is "Ready"
	I1201 22:11:06.393727  691475 pod_ready.go:86] duration metric: took 163.545352ms for pod "kube-proxy-pff7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:06.595035  691475 pod_ready.go:83] waiting for pod "kube-scheduler-pause-188533" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:06.993581  691475 pod_ready.go:94] pod "kube-scheduler-pause-188533" is "Ready"
	I1201 22:11:06.993611  691475 pod_ready.go:86] duration metric: took 398.549297ms for pod "kube-scheduler-pause-188533" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 22:11:06.993631  691475 pod_ready.go:40] duration metric: took 8.870782498s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 22:11:07.056241  691475 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1201 22:11:07.059407  691475 out.go:179] * Done! kubectl is now configured to use "pause-188533" cluster and "default" namespace by default
	I1201 22:11:10.316036  661844 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1201 22:11:10.316076  661844 kubeadm.go:319] 
	I1201 22:11:10.316145  661844 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1201 22:11:10.320449  661844 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 22:11:10.320516  661844 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 22:11:10.320610  661844 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 22:11:10.320673  661844 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1201 22:11:10.320716  661844 kubeadm.go:319] OS: Linux
	I1201 22:11:10.320766  661844 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 22:11:10.320818  661844 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1201 22:11:10.320870  661844 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 22:11:10.320923  661844 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 22:11:10.320974  661844 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 22:11:10.321027  661844 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 22:11:10.321076  661844 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 22:11:10.321128  661844 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 22:11:10.321178  661844 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1201 22:11:10.321262  661844 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 22:11:10.321360  661844 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 22:11:10.321449  661844 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 22:11:10.321512  661844 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 22:11:10.324392  661844 out.go:252]   - Generating certificates and keys ...
	I1201 22:11:10.324484  661844 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 22:11:10.324546  661844 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 22:11:10.324619  661844 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1201 22:11:10.324676  661844 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1201 22:11:10.324742  661844 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1201 22:11:10.324793  661844 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1201 22:11:10.324853  661844 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1201 22:11:10.324911  661844 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1201 22:11:10.324981  661844 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1201 22:11:10.325050  661844 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1201 22:11:10.325086  661844 kubeadm.go:319] [certs] Using the existing "sa" key
	I1201 22:11:10.325138  661844 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 22:11:10.325187  661844 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 22:11:10.325241  661844 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 22:11:10.325296  661844 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 22:11:10.325367  661844 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 22:11:10.325421  661844 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 22:11:10.325502  661844 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 22:11:10.325565  661844 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 22:11:10.328773  661844 out.go:252]   - Booting up control plane ...
	I1201 22:11:10.328958  661844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 22:11:10.329097  661844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 22:11:10.329214  661844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 22:11:10.329357  661844 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 22:11:10.329469  661844 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 22:11:10.329584  661844 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 22:11:10.329675  661844 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 22:11:10.329717  661844 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 22:11:10.329861  661844 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 22:11:10.329981  661844 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 22:11:10.330052  661844 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000610913s
	I1201 22:11:10.330056  661844 kubeadm.go:319] 
	I1201 22:11:10.330117  661844 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1201 22:11:10.330152  661844 kubeadm.go:319] 	- The kubelet is not running
	I1201 22:11:10.330279  661844 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1201 22:11:10.330284  661844 kubeadm.go:319] 
	I1201 22:11:10.330397  661844 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1201 22:11:10.330431  661844 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1201 22:11:10.330468  661844 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1201 22:11:10.330580  661844 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000610913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1201 22:11:10.330659  661844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1201 22:11:10.331119  661844 kubeadm.go:319] 
	I1201 22:11:10.752785  661844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 22:11:10.769471  661844 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 22:11:10.769541  661844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 22:11:10.783355  661844 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 22:11:10.783376  661844 kubeadm.go:158] found existing configuration files:
	
	I1201 22:11:10.783429  661844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1201 22:11:10.793807  661844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 22:11:10.793874  661844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 22:11:10.803377  661844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1201 22:11:10.815256  661844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 22:11:10.815327  661844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 22:11:10.825831  661844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1201 22:11:10.836521  661844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 22:11:10.836589  661844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 22:11:10.845777  661844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1201 22:11:10.857065  661844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 22:11:10.857130  661844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 22:11:10.866303  661844 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 22:11:10.928739  661844 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 22:11:10.929148  661844 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 22:11:11.034840  661844 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 22:11:11.034916  661844 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1201 22:11:11.034955  661844 kubeadm.go:319] OS: Linux
	I1201 22:11:11.035001  661844 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 22:11:11.035051  661844 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1201 22:11:11.035099  661844 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 22:11:11.035167  661844 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 22:11:11.035229  661844 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 22:11:11.035284  661844 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 22:11:11.035330  661844 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 22:11:11.035379  661844 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 22:11:11.035425  661844 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1201 22:11:11.138265  661844 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 22:11:11.138390  661844 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 22:11:11.138483  661844 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 22:11:11.161961  661844 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 22:11:11.166168  661844 out.go:252]   - Generating certificates and keys ...
	I1201 22:11:11.166264  661844 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 22:11:11.166354  661844 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 22:11:11.166440  661844 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1201 22:11:11.166508  661844 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1201 22:11:11.166582  661844 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1201 22:11:11.166642  661844 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1201 22:11:11.166708  661844 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1201 22:11:11.166773  661844 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1201 22:11:11.166851  661844 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1201 22:11:11.167374  661844 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1201 22:11:11.167726  661844 kubeadm.go:319] [certs] Using the existing "sa" key
	I1201 22:11:11.167794  661844 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 22:11:11.466210  661844 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 22:11:11.596073  661844 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 22:11:12.024697  661844 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 22:11:12.396856  661844 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 22:11:12.497800  661844 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 22:11:12.499750  661844 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 22:11:12.502357  661844 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.252571681Z" level=info msg="Created container 23eb3c5f241ec3c013861dcfd53965dc750a523680455562b21e81f35d69d660: kube-system/kube-apiserver-pause-188533/kube-apiserver" id=cf88c3c3-e100-4f14-a6a9-7b285f39f5c3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.253824216Z" level=info msg="Starting container: 23eb3c5f241ec3c013861dcfd53965dc750a523680455562b21e81f35d69d660" id=0eff0920-c01e-485a-a103-d2384df39a97 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.257522262Z" level=info msg="Created container 34ef219e4fc6b788dede490a784e4cdb33ebd117e7eae0c1ce37fbcbdfae616b: kube-system/kube-controller-manager-pause-188533/kube-controller-manager" id=5bc60c1c-e772-4c25-b89a-75eeba034388 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.25791244Z" level=info msg="Started container" PID=2372 containerID=dd06372589408364b8b58065de97cbe55de7a24dfcbd37bfc9b061320d5e4539 description=kube-system/kube-scheduler-pause-188533/kube-scheduler id=7831e21c-a38b-491b-80fa-4fa7b19acf1b name=/runtime.v1.RuntimeService/StartContainer sandboxID=bb4c908114e901512f3c6c27a1601227dbbff1a1c6c02f329476a2d9974e1398
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.265013038Z" level=info msg="Starting container: 34ef219e4fc6b788dede490a784e4cdb33ebd117e7eae0c1ce37fbcbdfae616b" id=e3a2652f-c867-490b-9ed9-818654ae7081 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.266904656Z" level=info msg="Started container" PID=2381 containerID=23eb3c5f241ec3c013861dcfd53965dc750a523680455562b21e81f35d69d660 description=kube-system/kube-apiserver-pause-188533/kube-apiserver id=0eff0920-c01e-485a-a103-d2384df39a97 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7f709d63bcefd4fefdf81d641dd5e6425d6058a592ec35bfb5e959368ef278a8
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.275700618Z" level=info msg="Started container" PID=2385 containerID=34ef219e4fc6b788dede490a784e4cdb33ebd117e7eae0c1ce37fbcbdfae616b description=kube-system/kube-controller-manager-pause-188533/kube-controller-manager id=e3a2652f-c867-490b-9ed9-818654ae7081 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c1f63c7a59179a680d67c304e6e736cad04acec3e95d608674503dd1569f9c1
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.284600012Z" level=info msg="Created container dd1313324144863644a9ae08f65ff0230b115b579027738e7ca471d556ff8723: kube-system/etcd-pause-188533/etcd" id=ee4b3dde-8344-462b-b47b-de57f8f3a4de name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.285223006Z" level=info msg="Starting container: dd1313324144863644a9ae08f65ff0230b115b579027738e7ca471d556ff8723" id=c8e6e610-79a9-47eb-85c3-7409fb2a2fe8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 22:10:51 pause-188533 crio[2070]: time="2025-12-01T22:10:51.288386679Z" level=info msg="Started container" PID=2384 containerID=dd1313324144863644a9ae08f65ff0230b115b579027738e7ca471d556ff8723 description=kube-system/etcd-pause-188533/etcd id=c8e6e610-79a9-47eb-85c3-7409fb2a2fe8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0090281d9f4b29b0309ae9c162ea6666be79bdf23f971bdcb31d212deb915181
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.506886969Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.510854408Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.510897213Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.510924273Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.514664475Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.514703974Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.514729869Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.519093811Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.519246407Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.519274665Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.522665263Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.522700954Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.522733823Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.526223807Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 01 22:11:01 pause-188533 crio[2070]: time="2025-12-01T22:11:01.526269837Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	34ef219e4fc6b       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   22 seconds ago       Running             kube-controller-manager   1                   6c1f63c7a5917       kube-controller-manager-pause-188533   kube-system
	dd13133241448       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   22 seconds ago       Running             etcd                      1                   0090281d9f4b2       etcd-pause-188533                      kube-system
	23eb3c5f241ec       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   22 seconds ago       Running             kube-apiserver            1                   7f709d63bcefd       kube-apiserver-pause-188533            kube-system
	dd06372589408       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   22 seconds ago       Running             kube-scheduler            1                   bb4c908114e90       kube-scheduler-pause-188533            kube-system
	bebec9756e9fc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   7db5a7024eebe       coredns-66bc5c9577-p9whp               kube-system
	10a70a6f5dbc1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   22 seconds ago       Running             kindnet-cni               1                   d017a8bf05218       kindnet-cwlgd                          kube-system
	125f9c98221f0       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   22 seconds ago       Running             kube-proxy                1                   2a86b772ebf52       kube-proxy-pff7q                       kube-system
	81d2cfcf9894f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   35 seconds ago       Exited              coredns                   0                   7db5a7024eebe       coredns-66bc5c9577-p9whp               kube-system
	f517468165e61       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   2a86b772ebf52       kube-proxy-pff7q                       kube-system
	55f3da13e226d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   d017a8bf05218       kindnet-cwlgd                          kube-system
	ee45ee63674dd       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   6c1f63c7a5917       kube-controller-manager-pause-188533   kube-system
	829d829e6c58b       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   bb4c908114e90       kube-scheduler-pause-188533            kube-system
	9d0eee6eaad1b       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   7f709d63bcefd       kube-apiserver-pause-188533            kube-system
	7f474f99bf26b       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   0090281d9f4b2       etcd-pause-188533                      kube-system
	
	
	==> coredns [81d2cfcf9894f9d0a557d8c60957ff9e303266d6d73df814a3f9fa53142a11f0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57040 - 32835 "HINFO IN 261061319210568351.4120552146690658369. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015904699s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bebec9756e9fcbf5790b7011256eaa67c0c522c378104d3e1ef3cdef4e894373] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38507 - 41417 "HINFO IN 2060688919814901410.6700011450987518916. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036692673s
	
	
	==> describe nodes <==
	Name:               pause-188533
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-188533
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=pause-188533
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T22_09_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 22:09:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-188533
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 22:11:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 22:11:02 +0000   Mon, 01 Dec 2025 22:09:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 22:11:02 +0000   Mon, 01 Dec 2025 22:09:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 22:11:02 +0000   Mon, 01 Dec 2025 22:09:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 22:11:02 +0000   Mon, 01 Dec 2025 22:10:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-188533
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                6d0ec033-c4d5-41c6-890c-9daf4739fc8c
	  Boot ID:                    06dea43b-2aa1-4726-8bb8-0a198189349a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-p9whp                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     77s
	  kube-system                 etcd-pause-188533                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         82s
	  kube-system                 kindnet-cwlgd                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      78s
	  kube-system                 kube-apiserver-pause-188533             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-pause-188533    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-pff7q                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-pause-188533             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 76s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Normal   NodeHasSufficientMemory  91s (x8 over 91s)  kubelet          Node pause-188533 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    91s (x8 over 91s)  kubelet          Node pause-188533 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     91s (x8 over 91s)  kubelet          Node pause-188533 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 83s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  83s                kubelet          Node pause-188533 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s                kubelet          Node pause-188533 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s                kubelet          Node pause-188533 status is now: NodeHasSufficientPID
	  Normal   Starting                 83s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           78s                node-controller  Node pause-188533 event: Registered Node pause-188533 in Controller
	  Normal   NodeReady                36s                kubelet          Node pause-188533 status is now: NodeReady
	  Normal   RegisteredNode           14s                node-controller  Node pause-188533 event: Registered Node pause-188533 in Controller
	
	
	==> dmesg <==
	[ +32.789765] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:39] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:40] overlayfs: idmapped layers are currently not supported
	[  +3.421799] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:41] overlayfs: idmapped layers are currently not supported
	[ +28.971373] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:43] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:48] overlayfs: idmapped layers are currently not supported
	[ +29.317685] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:50] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:51] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:52] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:53] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:54] overlayfs: idmapped layers are currently not supported
	[  +2.710821] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:55] overlayfs: idmapped layers are currently not supported
	[ +23.922036] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:56] overlayfs: idmapped layers are currently not supported
	[ +26.428517] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:58] overlayfs: idmapped layers are currently not supported
	[Dec 1 21:59] overlayfs: idmapped layers are currently not supported
	[Dec 1 22:01] overlayfs: idmapped layers are currently not supported
	[Dec 1 22:02] overlayfs: idmapped layers are currently not supported
	[ +24.384212] overlayfs: idmapped layers are currently not supported
	[Dec 1 22:09] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7f474f99bf26bbeb5fd1570a1b4d07a037ededb0976f97b7cfaf0397b2fad3c9] <==
	{"level":"warn","ts":"2025-12-01T22:09:47.190467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:09:47.224941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:09:47.248586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:09:47.279674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:09:47.288830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:09:47.314253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:09:47.368479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40694","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-01T22:10:42.629037Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-01T22:10:42.629104Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-188533","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-01T22:10:42.629236Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-01T22:10:42.804631Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-01T22:10:42.804726Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-01T22:10:42.804748Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-01T22:10:42.804804Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-01T22:10:42.804866Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-01T22:10:42.804901Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-01T22:10:42.804909Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-01T22:10:42.804926Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-01T22:10:42.805043Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-01T22:10:42.805113Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-01T22:10:42.805122Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-01T22:10:42.808801Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-01T22:10:42.808963Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-01T22:10:42.809014Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-01T22:10:42.809025Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-188533","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [dd1313324144863644a9ae08f65ff0230b115b579027738e7ca471d556ff8723] <==
	{"level":"warn","ts":"2025-12-01T22:10:54.696784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:54.732996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:54.780118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:54.792815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:54.830260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:54.837708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:54.859367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:54.883441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:54.944803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:54.993003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.046231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.060892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.124680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.131380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.145249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.198244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.215647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.260303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.288028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.311774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.359354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.392470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.409291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.427722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T22:10:55.484914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39782","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:11:13 up  3:53,  0 user,  load average: 1.78, 2.03, 2.01
	Linux pause-188533 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [10a70a6f5dbc1cf1cdce344789e47ea80729baa80aa552017e27ca47b2227324] <==
	I1201 22:10:51.300153       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 22:10:51.300505       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1201 22:10:51.300666       1 main.go:148] setting mtu 1500 for CNI 
	I1201 22:10:51.300708       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 22:10:51.300751       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T22:10:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 22:10:51.506048       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 22:10:51.506147       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 22:10:51.506189       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 22:10:51.507215       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1201 22:10:56.706860       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 22:10:56.706895       1 metrics.go:72] Registering metrics
	I1201 22:10:56.706952       1 controller.go:711] "Syncing nftables rules"
	I1201 22:11:01.506454       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1201 22:11:01.506510       1 main.go:301] handling current node
	I1201 22:11:11.509011       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1201 22:11:11.509062       1 main.go:301] handling current node
	
	
	==> kindnet [55f3da13e226d8e308b654b59a220e4446138e3b1a7e31583e355636f80f4a1e] <==
	I1201 22:09:57.102229       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 22:09:57.102528       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1201 22:09:57.102717       1 main.go:148] setting mtu 1500 for CNI 
	I1201 22:09:57.102739       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 22:09:57.102759       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T22:09:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 22:09:57.302683       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 22:09:57.302761       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 22:09:57.302811       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 22:09:57.303193       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1201 22:10:27.303214       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1201 22:10:27.303226       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1201 22:10:27.303350       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1201 22:10:27.304624       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1201 22:10:28.603001       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 22:10:28.603031       1 metrics.go:72] Registering metrics
	I1201 22:10:28.603105       1 controller.go:711] "Syncing nftables rules"
	I1201 22:10:37.308309       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1201 22:10:37.308357       1 main.go:301] handling current node
	
	
	==> kube-apiserver [23eb3c5f241ec3c013861dcfd53965dc750a523680455562b21e81f35d69d660] <==
	I1201 22:10:56.610396       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 22:10:56.610434       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1201 22:10:56.610646       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1201 22:10:56.610813       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1201 22:10:56.610877       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1201 22:10:56.611117       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1201 22:10:56.611419       1 aggregator.go:171] initial CRD sync complete...
	I1201 22:10:56.611438       1 autoregister_controller.go:144] Starting autoregister controller
	I1201 22:10:56.611445       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1201 22:10:56.611452       1 cache.go:39] Caches are synced for autoregister controller
	I1201 22:10:56.611707       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1201 22:10:56.615766       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1201 22:10:56.615800       1 policy_source.go:240] refreshing policies
	I1201 22:10:56.628039       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1201 22:10:56.635536       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1201 22:10:56.641454       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1201 22:10:56.641582       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1201 22:10:56.647595       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1201 22:10:56.685216       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1201 22:10:57.315300       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1201 22:10:57.728122       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 22:10:59.217409       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 22:10:59.266499       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 22:10:59.414594       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 22:10:59.467175       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [9d0eee6eaad1b35e670616323ad9a68b2e29bce0607e45617707c0e8878e234e] <==
	W1201 22:10:42.649383       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.649445       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.649457       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.649508       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.649522       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.649695       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.649796       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.649890       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.650958       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.650997       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651030       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651061       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651091       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651123       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651356       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651463       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651496       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651673       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651744       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651778       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651810       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651838       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.651869       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.652899       1 logging.go:55] [core] [Channel #18 SubChannel #22]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1201 22:10:42.652965       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [34ef219e4fc6b788dede490a784e4cdb33ebd117e7eae0c1ce37fbcbdfae616b] <==
	I1201 22:10:59.091848       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1201 22:10:59.091931       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-188533"
	I1201 22:10:59.091981       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1201 22:10:59.096367       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1201 22:10:59.100621       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1201 22:10:59.101774       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1201 22:10:59.104977       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1201 22:10:59.107364       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1201 22:10:59.107381       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1201 22:10:59.108595       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1201 22:10:59.108620       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1201 22:10:59.108895       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1201 22:10:59.108969       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1201 22:10:59.109489       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1201 22:10:59.111085       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1201 22:10:59.111261       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1201 22:10:59.111675       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 22:10:59.112839       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1201 22:10:59.114775       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1201 22:10:59.117814       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1201 22:10:59.117868       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 22:10:59.118011       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1201 22:10:59.118020       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1201 22:10:59.121391       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1201 22:10:59.127075       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	
	
	==> kube-controller-manager [ee45ee63674dd2a1ab4e2fc4f3a39bf0a55bbc05e2ffd6adef12eb238ba3b810] <==
	I1201 22:09:55.098761       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1201 22:09:55.098832       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1201 22:09:55.098936       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-188533"
	I1201 22:09:55.099014       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1201 22:09:55.099075       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1201 22:09:55.099535       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1201 22:09:55.101586       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1201 22:09:55.102215       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1201 22:09:55.102541       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1201 22:09:55.102579       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1201 22:09:55.102856       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1201 22:09:55.102981       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1201 22:09:55.103083       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1201 22:09:55.103209       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1201 22:09:55.103291       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1201 22:09:55.103391       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1201 22:09:55.103451       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1201 22:09:55.103513       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1201 22:09:55.103546       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1201 22:09:55.103579       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1201 22:09:55.109560       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1201 22:09:55.110991       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 22:09:55.117950       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1201 22:09:55.124284       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-188533" podCIDRs=["10.244.0.0/24"]
	I1201 22:10:40.337282       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [125f9c98221f08aefc5c1d5767531793e6623aafdbbe2788da3f53d0b37c5b5f] <==
	I1201 22:10:51.222407       1 server_linux.go:53] "Using iptables proxy"
	I1201 22:10:52.696724       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 22:10:56.743281       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 22:10:56.748363       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1201 22:10:56.748459       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 22:10:56.804625       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 22:10:56.804685       1 server_linux.go:132] "Using iptables Proxier"
	I1201 22:10:56.821901       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 22:10:56.822266       1 server.go:527] "Version info" version="v1.34.2"
	I1201 22:10:56.822323       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 22:10:56.832945       1 config.go:200] "Starting service config controller"
	I1201 22:10:56.833035       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 22:10:56.833080       1 config.go:106] "Starting endpoint slice config controller"
	I1201 22:10:56.833111       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 22:10:56.833150       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 22:10:56.833179       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 22:10:56.833916       1 config.go:309] "Starting node config controller"
	I1201 22:10:56.833989       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 22:10:56.834026       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 22:10:56.934151       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 22:10:56.934310       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 22:10:56.934311       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f517468165e6149e74b6caf291fa53acbeda3290845551238b0ba8999a831f3a] <==
	I1201 22:09:57.270982       1 server_linux.go:53] "Using iptables proxy"
	I1201 22:09:57.352710       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 22:09:57.453298       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 22:09:57.453428       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1201 22:09:57.453523       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 22:09:57.472836       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 22:09:57.472917       1 server_linux.go:132] "Using iptables Proxier"
	I1201 22:09:57.478564       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 22:09:57.478971       1 server.go:527] "Version info" version="v1.34.2"
	I1201 22:09:57.479002       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 22:09:57.484344       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 22:09:57.484437       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 22:09:57.484850       1 config.go:200] "Starting service config controller"
	I1201 22:09:57.484905       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 22:09:57.484960       1 config.go:106] "Starting endpoint slice config controller"
	I1201 22:09:57.484993       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 22:09:57.486636       1 config.go:309] "Starting node config controller"
	I1201 22:09:57.486661       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 22:09:57.486670       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 22:09:57.584661       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 22:09:57.585924       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1201 22:09:57.585994       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [829d829e6c58b92ed7c38ab13e650eb88fd6a1b9d086634b309b04cd79ffb2eb] <==
	E1201 22:09:48.129006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1201 22:09:48.129064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1201 22:09:48.129119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1201 22:09:48.129132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1201 22:09:48.129183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1201 22:09:48.129221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1201 22:09:48.129282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 22:09:48.129339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1201 22:09:48.129403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1201 22:09:48.129529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 22:09:48.129595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1201 22:09:48.956498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1201 22:09:48.957496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1201 22:09:49.015419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 22:09:49.040095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1201 22:09:49.072274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1201 22:09:49.136444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1201 22:09:49.229753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1201 22:09:49.276870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1201 22:09:51.802770       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 22:10:42.631838       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1201 22:10:42.632819       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1201 22:10:42.632838       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1201 22:10:42.632988       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1201 22:10:42.633004       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [dd06372589408364b8b58065de97cbe55de7a24dfcbd37bfc9b061320d5e4539] <==
	I1201 22:10:53.439436       1 serving.go:386] Generated self-signed cert in-memory
	W1201 22:10:56.475614       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1201 22:10:56.475720       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1201 22:10:56.475756       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1201 22:10:56.475786       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1201 22:10:56.586473       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1201 22:10:56.586577       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 22:10:56.595842       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1201 22:10:56.596074       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 22:10:56.607252       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 22:10:56.596093       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1201 22:10:56.709746       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.092781    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-188533\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="61f9f8cc8d13080efe9b6905f3567ed5" pod="kube-system/kube-apiserver-pause-188533"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.093095    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-188533\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="31c592842db5e39ccc09d331cc027c0e" pod="kube-system/etcd-pause-188533"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: I1201 22:10:51.098557    1317 scope.go:117] "RemoveContainer" containerID="9d0eee6eaad1b35e670616323ad9a68b2e29bce0607e45617707c0e8878e234e"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.099321    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-cwlgd\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0f8efb97-b10e-49da-8be5-e8bf43a2f0b7" pod="kube-system/kindnet-cwlgd"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.099674    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-p9whp\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1ae3303f-38c2-4928-9594-2bdc7c75b9e0" pod="kube-system/coredns-66bc5c9577-p9whp"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.099998    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-188533\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="61f9f8cc8d13080efe9b6905f3567ed5" pod="kube-system/kube-apiserver-pause-188533"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.100308    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-188533\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="31c592842db5e39ccc09d331cc027c0e" pod="kube-system/etcd-pause-188533"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.100603    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-188533\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4dfcd21f9a4adc8f94b19842c00db276" pod="kube-system/kube-scheduler-pause-188533"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.100903    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pff7q\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="8e72b6d8-a889-4a4e-89d7-ef091e9af0bb" pod="kube-system/kube-proxy-pff7q"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: I1201 22:10:51.125630    1317 scope.go:117] "RemoveContainer" containerID="ee45ee63674dd2a1ab4e2fc4f3a39bf0a55bbc05e2ffd6adef12eb238ba3b810"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.126431    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-188533\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="31c592842db5e39ccc09d331cc027c0e" pod="kube-system/etcd-pause-188533"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.126665    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-188533\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bffb4fb097a12663dbdef28deabbe665" pod="kube-system/kube-controller-manager-pause-188533"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.127109    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-188533\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4dfcd21f9a4adc8f94b19842c00db276" pod="kube-system/kube-scheduler-pause-188533"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.127655    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pff7q\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="8e72b6d8-a889-4a4e-89d7-ef091e9af0bb" pod="kube-system/kube-proxy-pff7q"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.128028    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-cwlgd\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0f8efb97-b10e-49da-8be5-e8bf43a2f0b7" pod="kube-system/kindnet-cwlgd"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.128363    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-p9whp\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1ae3303f-38c2-4928-9594-2bdc7c75b9e0" pod="kube-system/coredns-66bc5c9577-p9whp"
	Dec 01 22:10:51 pause-188533 kubelet[1317]: E1201 22:10:51.129323    1317 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-188533\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="61f9f8cc8d13080efe9b6905f3567ed5" pod="kube-system/kube-apiserver-pause-188533"
	Dec 01 22:10:56 pause-188533 kubelet[1317]: E1201 22:10:56.460307    1317 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-188533\" is forbidden: User \"system:node:pause-188533\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-188533' and this object" podUID="bffb4fb097a12663dbdef28deabbe665" pod="kube-system/kube-controller-manager-pause-188533"
	Dec 01 22:10:56 pause-188533 kubelet[1317]: E1201 22:10:56.460489    1317 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-188533\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-188533' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 01 22:10:56 pause-188533 kubelet[1317]: E1201 22:10:56.460614    1317 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-188533\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-188533' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 01 22:10:56 pause-188533 kubelet[1317]: E1201 22:10:56.529657    1317 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-188533\" is forbidden: User \"system:node:pause-188533\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-188533' and this object" podUID="4dfcd21f9a4adc8f94b19842c00db276" pod="kube-system/kube-scheduler-pause-188533"
	Dec 01 22:11:01 pause-188533 kubelet[1317]: W1201 22:11:01.050183    1317 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 01 22:11:07 pause-188533 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 01 22:11:07 pause-188533 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 01 22:11:07 pause-188533 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-188533 -n pause-188533
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-188533 -n pause-188533: exit status 2 (379.752614ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-188533 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.61s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7200.066s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1201 22:35:34.916695  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1201 22:36:02.450549  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 22:36:54.822826  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/old-k8s-version-760020/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1201 22:37:25.526018  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (25m43s)
		TestStartStop (28m18s)
		TestStartStop/group/newest-cni (12m57s)
		TestStartStop/group/newest-cni/serial (12m57s)
		TestStartStop/group/newest-cni/serial/SecondStart (3m2s)
		TestStartStop/group/no-preload (19m28s)
		TestStartStop/group/no-preload/serial (19m28s)
		TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (3m2s)

goroutine 5812 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38

goroutine 1 [chan receive, 20 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4000100fc0, 0x400045dbb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x40006d0060, {0x534c580, 0x2c, 0x2c}, {0x400045dd08?, 0x125774?, 0x5374f80?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x400067ed20)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x400067ed20)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

goroutine 5220 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40016af680, 0x40000822a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5199
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 191 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 190
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4826 [chan receive, 26 minutes]:
testing.(*testState).waitParallel(0x40006e43c0)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40014aa8c0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40014aa8c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40014aa8c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40014aa8c0, 0x4001c6d080)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4680
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4741 [chan receive, 26 minutes]:
testing.(*testState).waitParallel(0x40006e43c0)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40016d3500)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40016d3500)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40016d3500)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40016d3500, 0x4001c6c780)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4680
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 186 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40018e5140, 0x40000822a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 178
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 190 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b10, 0x40000822a0}, 0x40013db740, 0x40014a7f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b10, 0x40000822a0}, 0x20?, 0x40013db740, 0x40013db788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b10?, 0x40000822a0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001871280?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 186
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 189 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001871390, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001871380)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701d80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40018e5140)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400199e850?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b10?, 0x40000822a0?}, 0x40000a4ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b10, 0x40000822a0}, 0x400076df38, {0x369d680, 0x40013e7a70}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40000a4fa8?, {0x369d680?, 0x40013e7a70?}, 0x80?, 0x4000496c38?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001970a50, 0x3b9aca00, 0x0, 0x1, 0x40000822a0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 186
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 2845 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4000781250, 0x22)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000781240)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701d80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001a1ab40)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40000a4718?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b10?, 0x40000822a0?}, 0x40000a46a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b10, 0x40000822a0}, 0x4001504f38, {0x369d680, 0x40006a4300}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f3430?, {0x369d680?, 0x40006a4300?}, 0x70?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001685300, 0x3b9aca00, 0x0, 0x1, 0x40000822a0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2866
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 2847 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2846
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 185 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe880, {{0x36f3430, 0x4000224080?}, 0x40019c22a0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 178
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4680 [chan receive, 26 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x40016d2c40, 0x4001562570)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 4373
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 2270 [chan send, 99 minutes]:
os/exec.(*Cmd).watchCtx(0x400137e180, 0x40015501c0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 745
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 2846 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b10, 0x40000822a0}, 0x40014c2f40, 0x40013b9f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b10, 0x40000822a0}, 0x90?, 0x40014c2f40, 0x40014c2f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b10?, 0x40000822a0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40002c7e00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2866
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 617 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0xffff54223c00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40004c0480?, 0x2d970?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x40004c0480)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x40004c0480)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40019dca80)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40019dca80)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4000152e00, {0x36d3120, 0x40019dca80})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4000152e00)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 615
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 5434 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5433
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4373 [chan receive, 26 minutes]:
testing.(*T).Run(0x40014aa1c0, {0x296d53a?, 0xbf65396e9f4?}, 0x4001562570)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins(0x40014aa1c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xe4
testing.tRunner(0x40014aa1c0, 0x339b500)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 817 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0x400070e610, 0x2a)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400070e600)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701d80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001591e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400037ae70?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b10?, 0x40000822a0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b10, 0x40000822a0}, 0x40014dbf38, {0x369d680, 0x40019721b0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f3430?, {0x369d680?, 0x40019721b0?}, 0xd0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40019d6df0, 0x3b9aca00, 0x0, 0x1, 0x40000822a0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 808
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 808 [chan receive, 111 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001591e00, 0x40000822a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 806
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5441 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36e5778, 0x400030b7a0}, {0x36d3780, 0x400040ce80}, 0x1, 0x0, 0x40000dbbe0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/loop.go:66 +0x158
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36e5778?, 0x4000295730?}, 0x3b9aca00, 0x40000dbe08?, 0x1, 0x40000dbbe0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:48 +0x8c
k8s.io/minikube/test/integration.PodWait({0x36e5778, 0x4000295730}, 0x40017e0380, {0x40017dc288, 0x11}, {0x2993faf, 0x14}, {0x29abe76, 0x1c}, 0x7dba821800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:379 +0x22c
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36e5778, 0x4000295730}, 0x40017e0380, {0x40017dc288, 0x11}, {0x297850c?, 0x1cc0faac00161e84?}, {0x692e17f9?, 0x40014a4f58?}, {0x161f08?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:272 +0xf8
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x40017e0380?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x40017e0380, 0x4001c6c180)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4998
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4646 [chan receive, 20 minutes]:
testing.(*T).Run(0x400085cfc0, {0x296e9ac?, 0x0?}, 0x400073d180)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x400085cfc0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x400085cfc0, 0x4001868300)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4642
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 2166 [chan send, 99 minutes]:
os/exec.(*Cmd).watchCtx(0x40002c7080, 0x4001652fc0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2165
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 2865 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe880, {{0x36f3430, 0x4000224080?}, 0x400085c700?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2864
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 2212 [chan send, 99 minutes]:
os/exec.(*Cmd).watchCtx(0x400139e180, 0x4000083110)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2211
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5440 [IO wait]:
internal/poll.runtime_pollWait(0xffff53dd7400, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001998600?, 0x4001d23178?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001998600, {0x4001d23178, 0x2e88, 0x2e88})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x400078e188, {0x4001d23178?, 0x400009ed68?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x400134c4e0, {0x369ba58, 0x4000110ff8})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369bc40, 0x400134c4e0}, {0x369ba58, 0x4000110ff8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x400078e188?, {0x369bc40, 0x400134c4e0})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x400078e188, {0x369bc40, 0x400134c4e0})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369bc40, 0x400134c4e0}, {0x369bad8, 0x400078e188}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x400009ef90?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5438
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 5219 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe880, {{0x36f3430, 0x4000224080?}, 0x40013f3080?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5199
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5432 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x40019dc490, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40019dc480)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701d80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400194ad20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400030b5e0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b10?, 0x40000822a0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b10, 0x40000822a0}, 0x4001509f38, {0x369d680, 0x400134c060}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f3430?, {0x369d680?, 0x400134c060?}, 0xe0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40019d6090, 0x3b9aca00, 0x0, 0x1, 0x40000822a0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5443
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4744 [chan receive, 26 minutes]:
testing.(*testState).waitParallel(0x40006e43c0)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40016d3a40)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40016d3a40)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40016d3a40)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40016d3a40, 0x4001c6c900)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4680
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5443 [chan receive, 3 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400194ad20, 0x40000822a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5441
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4742 [chan receive, 26 minutes]:
testing.(*testState).waitParallel(0x40006e43c0)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40016d36c0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40016d36c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40016d36c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40016d36c0, 0x4001c6c800)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4680
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3193 [chan send, 69 minutes]:
os/exec.(*Cmd).watchCtx(0x4001512900, 0x4001653500)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 3192
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5225 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5224
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 2866 [chan receive, 71 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001a1ab40, 0x40000822a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2864
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5439 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0xffff54223e00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001998540?, 0x400157d362?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001998540, {0x400157d362, 0x49e, 0x49e})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x400078e160, {0x400157d362?, 0x40015d6d68?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x400134c4b0, {0x369ba58, 0x4000110fe8})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369bc40, 0x400134c4b0}, {0x369ba58, 0x4000110fe8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x400078e160?, {0x369bc40, 0x400134c4b0})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x400078e160, {0x369bc40, 0x400134c4b0})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369bc40, 0x400134c4b0}, {0x369bad8, 0x400078e160}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x40014b4fc0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5438
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 5442 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe880, {{0x36f3430, 0x4000224080?}, 0x40017e0380?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5441
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5433 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b10, 0x40000822a0}, 0x40015daf40, 0x40015daf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b10, 0x40000822a0}, 0xe8?, 0x40015daf40, 0x40015daf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b10?, 0x40000822a0?}, 0x40002c7800?, 0x400049adc0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40002c6600?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5443
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5224 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b10, 0x40000822a0}, 0x40013dcf40, 0x40014a3f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b10, 0x40000822a0}, 0xf8?, 0x40013dcf40, 0x40013dcf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b10?, 0x40000822a0?}, 0x0?, 0x95c64?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x40003a1a00?, 0x95c64?, 0x40017e0700?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5220
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4644 [chan receive, 13 minutes]:
testing.(*T).Run(0x400085c700, {0x296e9ac?, 0x0?}, 0x400073d300)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x400085c700)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x400085c700, 0x4001868200)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4642
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5223 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x4000735ed0, 0x12)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000735ec0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701d80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40016af680)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40002f70a0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b10?, 0x40000822a0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b10, 0x40000822a0}, 0x4000769f38, {0x369d680, 0x40018a6420}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f3430?, {0x369d680?, 0x40018a6420?}, 0x30?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40016ed7d0, 0x3b9aca00, 0x0, 0x1, 0x40000822a0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5220
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 2482 [IO wait, 99 minutes]:
internal/poll.runtime_pollWait(0xffff54224000, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400073d380?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x400073d380)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x400073d380)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x4000780340)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x4000780340)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x40000f8a00, {0x36d3120, 0x4000780340})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x40000f8a00)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 2480
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 5457 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0x4000692180, 0x40014bcc40)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 5438
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 818 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b10, 0x40000822a0}, 0x40014c2f40, 0x40014a5f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b10, 0x40000822a0}, 0xc8?, 0x40014c2f40, 0x40014c2f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b10?, 0x40000822a0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001512180?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 808
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 807 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe880, {{0x36f3430, 0x4000224080?}, 0x400149cc00?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 806
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3276 [chan send, 69 minutes]:
os/exec.(*Cmd).watchCtx(0x40002c7380, 0x40000836c0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2652
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 819 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 818
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4825 [chan receive, 26 minutes]:
testing.(*testState).waitParallel(0x40006e43c0)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40016d3dc0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40016d3dc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40016d3dc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40016d3dc0, 0x4001c6d000)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4680
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4452 [chan receive, 29 minutes]:
testing.(*T).Run(0x40014aafc0, {0x296d53a?, 0x4001355f58?}, 0x339b730)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x40014aafc0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x40014aafc0, 0x339b548)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4940 [chan receive, 22 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40018e5a40, 0x40000822a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4935
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4681 [chan receive, 26 minutes]:
testing.(*testState).waitParallel(0x40006e43c0)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40016d2fc0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40016d2fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40016d2fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40016d2fc0, 0x4001c6c200)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4680
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4926 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40007359d0, 0x14)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40007359c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701d80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40018e5a40)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40014bcee0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b10?, 0x40000822a0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b10, 0x40000822a0}, 0x40013b5f38, {0x369d680, 0x40016805d0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369d680?, 0x40016805d0?}, 0x70?, 0x40012ff380?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40019d6750, 0x3b9aca00, 0x0, 0x1, 0x40000822a0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4940
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4939 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe880, {{0x36f3430, 0x4000224080?}, 0x4000482180?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4935
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3231 [chan send, 69 minutes]:
os/exec.(*Cmd).watchCtx(0x4001513b00, 0x4001550460)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 3230
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4743 [chan receive, 26 minutes]:
testing.(*testState).waitParallel(0x40006e43c0)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40016d3880)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40016d3880)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40016d3880)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40016d3880, 0x4001c6c880)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4680
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5438 [syscall, 3 minutes]:
syscall.Syscall6(0x5f, 0x3, 0x11, 0x4001351b18, 0x4, 0x40017c83f0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x4001351c78?, 0x1929a0?, 0xffffddeea19e?, 0x0?, 0x400154f520?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x4001870180)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x4001351c48?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x4000692180)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x4000692180)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x40014b4fc0, 0x4000692180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x154
k8s.io/minikube/test/integration.validateSecondStart({0x36e5778, 0x40004aecb0}, 0x40014b4fc0, {0x40017dc2d0, 0x11}, {0x2f45e6e1?, 0x2f45e6e100161e84?}, {0x692e17f9?, 0x4001351f58?}, {0x40000f8600?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:254 +0x90
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x40014b4fc0?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x40014b4fc0, 0x400073d480)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 5347
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4642 [chan receive, 19 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x400085c1c0, 0x339b730)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 4452
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 2360 [select, 99 minutes]:
net/http.(*persistConn).readLoop(0x40017c0240)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 2358
	/usr/local/go/src/net/http/transport.go:1947 +0x111c

goroutine 2361 [select, 99 minutes]:
net/http.(*persistConn).writeLoop(0x40017c0240)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 2358
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

goroutine 4927 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b10, 0x40000822a0}, 0x40014c0740, 0x40013b2f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b10, 0x40000822a0}, 0xd8?, 0x40014c0740, 0x40014c0788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b10?, 0x40000822a0?}, 0x4000692300?, 0x4001be2640?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000692780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4940
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5347 [chan receive, 3 minutes]:
testing.(*T).Run(0x40016d2a80, {0x297a643?, 0x40000006ee?}, 0x400073d480)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x40016d2a80)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x40016d2a80, 0x400073d300)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4644
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4928 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4927
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4998 [chan receive, 3 minutes]:
testing.(*T).Run(0x40017e0000, {0x2999fbc?, 0x40000006ee?}, 0x4001c6c180)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x40017e0000)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x40017e0000, 0x400073d180)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4646
	/usr/local/go/src/testing/testing.go:1997 +0x364


Test pass (224/316)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 9.47
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.31
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.18
12 TestDownloadOnly/v1.34.2/json-events 6.96
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.09
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.88
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.68
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
36 TestAddons/Setup 155.03
40 TestAddons/serial/GCPAuth/Namespaces 0.22
41 TestAddons/serial/GCPAuth/FakeCredentials 8.83
57 TestAddons/StoppedEnableDisable 12.54
58 TestCertOptions 35.91
59 TestCertExpiration 329.94
61 TestForceSystemdFlag 35.42
62 TestForceSystemdEnv 33.33
67 TestErrorSpam/setup 31.51
68 TestErrorSpam/start 0.8
69 TestErrorSpam/status 1.13
70 TestErrorSpam/pause 6.25
71 TestErrorSpam/unpause 5.88
72 TestErrorSpam/stop 1.51
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 77.98
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 29.29
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.1
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.6
84 TestFunctional/serial/CacheCmd/cache/add_local 1.14
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.86
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.15
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
92 TestFunctional/serial/ExtraConfig 45.3
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.68
95 TestFunctional/serial/LogsFileCmd 1.59
96 TestFunctional/serial/InvalidService 4.24
98 TestFunctional/parallel/ConfigCmd 0.48
99 TestFunctional/parallel/DashboardCmd 10.22
100 TestFunctional/parallel/DryRun 0.53
101 TestFunctional/parallel/InternationalLanguage 0.26
102 TestFunctional/parallel/StatusCmd 1.25
107 TestFunctional/parallel/AddonsCmd 0.18
108 TestFunctional/parallel/PersistentVolumeClaim 25.87
110 TestFunctional/parallel/SSHCmd 0.7
111 TestFunctional/parallel/CpCmd 2.5
113 TestFunctional/parallel/FileSync 0.38
114 TestFunctional/parallel/CertSync 2.18
118 TestFunctional/parallel/NodeLabels 0.11
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
122 TestFunctional/parallel/License 0.3
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.48
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
136 TestFunctional/parallel/ProfileCmd/profile_list 0.45
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
138 TestFunctional/parallel/MountCmd/any-port 7.7
139 TestFunctional/parallel/MountCmd/specific-port 2.16
140 TestFunctional/parallel/MountCmd/VerifyCleanup 1.34
141 TestFunctional/parallel/ServiceCmd/List 0.65
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.63
146 TestFunctional/parallel/Version/short 0.09
147 TestFunctional/parallel/Version/components 0.71
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
152 TestFunctional/parallel/ImageCommands/ImageBuild 4.23
153 TestFunctional/parallel/ImageCommands/Setup 0.65
157 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
158 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
159 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.63
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.6
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.04
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.08
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.31
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.84
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.16
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.96
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.04
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.49
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.44
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.24
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.24
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.73
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.19
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.37
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 2.4
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.75
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.38
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.11
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.66
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.24
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.24
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.25
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.23
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.68
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.3
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.16
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.16
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.19
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.66
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.42
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.4
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.39
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 2.01
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.32
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 209.73
265 TestMultiControlPlane/serial/DeployApp 6.82
266 TestMultiControlPlane/serial/PingHostFromPods 1.51
267 TestMultiControlPlane/serial/AddWorkerNode 60
268 TestMultiControlPlane/serial/NodeLabels 0.1
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.13
270 TestMultiControlPlane/serial/CopyFile 21.05
271 TestMultiControlPlane/serial/StopSecondaryNode 13.01
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
273 TestMultiControlPlane/serial/RestartSecondaryNode 111.05
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.2
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 121.36
276 TestMultiControlPlane/serial/DeleteSecondaryNode 10.62
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.81
278 TestMultiControlPlane/serial/StopCluster 36.21
279 TestMultiControlPlane/serial/RestartCluster 67.55
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.84
281 TestMultiControlPlane/serial/AddSecondaryNode 93.17
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.09
287 TestJSONOutput/start/Command 78.48
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.87
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.25
312 TestKicCustomNetwork/create_custom_network 63.07
313 TestKicCustomNetwork/use_default_bridge_network 35.66
314 TestKicExistingNetwork 38.15
315 TestKicCustomSubnet 35.15
316 TestKicStaticIP 37.55
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 68.6
321 TestMountStart/serial/StartWithMountFirst 6.39
322 TestMountStart/serial/VerifyMountFirst 0.29
323 TestMountStart/serial/StartWithMountSecond 8.93
324 TestMountStart/serial/VerifyMountSecond 0.28
325 TestMountStart/serial/DeleteFirst 1.74
326 TestMountStart/serial/VerifyMountPostDelete 0.29
327 TestMountStart/serial/Stop 1.3
328 TestMountStart/serial/RestartStopped 8.22
329 TestMountStart/serial/VerifyMountPostStop 0.28
332 TestMultiNode/serial/FreshStart2Nodes 139.5
333 TestMultiNode/serial/DeployApp2Nodes 5.94
334 TestMultiNode/serial/PingHostFrom2Pods 0.98
335 TestMultiNode/serial/AddNode 58.29
336 TestMultiNode/serial/MultiNodeLabels 0.1
337 TestMultiNode/serial/ProfileList 0.73
338 TestMultiNode/serial/CopyFile 10.8
339 TestMultiNode/serial/StopNode 2.48
340 TestMultiNode/serial/StartAfterStop 8.28
341 TestMultiNode/serial/RestartKeepsNodes 75.3
342 TestMultiNode/serial/DeleteNode 5.85
343 TestMultiNode/serial/StopMultiNode 24.11
344 TestMultiNode/serial/RestartMultiNode 54.33
345 TestMultiNode/serial/ValidateNameConflict 35.73
350 TestPreload 122.14
352 TestScheduledStopUnix 107.16
355 TestInsufficientStorage 13
356 TestRunningBinaryUpgrade 61.72
359 TestMissingContainerUpgrade 139.28
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 42.39
363 TestNoKubernetes/serial/StartWithStopK8s 7.29
364 TestNoKubernetes/serial/Start 9.53
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
367 TestNoKubernetes/serial/ProfileList 1.31
368 TestNoKubernetes/serial/Stop 1.42
369 TestNoKubernetes/serial/StartNoArgs 7.93
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
371 TestStoppedBinaryUpgrade/Setup 1.91
372 TestStoppedBinaryUpgrade/Upgrade 304.09
373 TestStoppedBinaryUpgrade/MinikubeLogs 1.76
382 TestPause/serial/Start 83.78
383 TestPause/serial/SecondStartNoReconfiguration 26.37
TestDownloadOnly/v1.28.0/json-events (9.47s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-111439 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-111439 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.471383916s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (9.47s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1201 20:37:44.788309  486002 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1201 20:37:44.788387  486002 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-111439
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-111439: exit status 85 (96.085335ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-111439 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-111439 │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:37:35
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:37:35.362155  486008 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:37:35.362278  486008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:37:35.362290  486008 out.go:374] Setting ErrFile to fd 2...
	I1201 20:37:35.362297  486008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:37:35.362557  486008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	W1201 20:37:35.362693  486008 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21997-482752/.minikube/config/config.json: open /home/jenkins/minikube-integration/21997-482752/.minikube/config/config.json: no such file or directory
	I1201 20:37:35.363118  486008 out.go:368] Setting JSON to true
	I1201 20:37:35.363992  486008 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8405,"bootTime":1764613051,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 20:37:35.364060  486008 start.go:143] virtualization:  
	I1201 20:37:35.369692  486008 out.go:99] [download-only-111439] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1201 20:37:35.369893  486008 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball: no such file or directory
	I1201 20:37:35.370016  486008 notify.go:221] Checking for updates...
	I1201 20:37:35.373631  486008 out.go:171] MINIKUBE_LOCATION=21997
	I1201 20:37:35.377027  486008 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:37:35.380211  486008 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 20:37:35.383291  486008 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 20:37:35.386590  486008 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1201 20:37:35.392781  486008 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1201 20:37:35.393041  486008 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:37:35.416561  486008 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 20:37:35.416682  486008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:37:35.494367  486008 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-01 20:37:35.484728521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 20:37:35.494492  486008 docker.go:319] overlay module found
	I1201 20:37:35.497723  486008 out.go:99] Using the docker driver based on user configuration
	I1201 20:37:35.497785  486008 start.go:309] selected driver: docker
	I1201 20:37:35.497800  486008 start.go:927] validating driver "docker" against <nil>
	I1201 20:37:35.497975  486008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:37:35.555053  486008 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-01 20:37:35.545114915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 20:37:35.555272  486008 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1201 20:37:35.555638  486008 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1201 20:37:35.555822  486008 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1201 20:37:35.559021  486008 out.go:171] Using Docker driver with root privileges
	I1201 20:37:35.562042  486008 cni.go:84] Creating CNI manager for ""
	I1201 20:37:35.562126  486008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:37:35.562142  486008 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1201 20:37:35.562237  486008 start.go:353] cluster config:
	{Name:download-only-111439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-111439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:37:35.565278  486008 out.go:99] Starting "download-only-111439" primary control-plane node in "download-only-111439" cluster
	I1201 20:37:35.565299  486008 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:37:35.568197  486008 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:37:35.568251  486008 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1201 20:37:35.568299  486008 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 20:37:35.587250  486008 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:37:35.587278  486008 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1201 20:37:35.587483  486008 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1201 20:37:35.587592  486008 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1201 20:37:35.636343  486008 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1201 20:37:35.636392  486008 cache.go:65] Caching tarball of preloaded images
	I1201 20:37:35.636656  486008 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1201 20:37:35.640350  486008 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1201 20:37:35.640413  486008 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1201 20:37:35.731798  486008 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1201 20:37:35.731945  486008 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-111439 host does not exist
	  To start a cluster, run: "minikube start -p download-only-111439"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)

TestDownloadOnly/v1.28.0/DeleteAll (0.31s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.31s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.18s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-111439
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.18s)

TestDownloadOnly/v1.34.2/json-events (6.96s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-000800 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-000800 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.961665959s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (6.96s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1201 20:37:52.334068  486002 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1201 20:37:52.334105  486002 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-000800
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-000800: exit status 85 (93.257226ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-111439 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-111439 │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │ 01 Dec 25 20:37 UTC │
	│ delete  │ -p download-only-111439                                                                                                                                                   │ download-only-111439 │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │ 01 Dec 25 20:37 UTC │
	│ start   │ -o=json --download-only -p download-only-000800 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-000800 │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:37:45
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:37:45.415732  486209 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:37:45.415957  486209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:37:45.415986  486209 out.go:374] Setting ErrFile to fd 2...
	I1201 20:37:45.416010  486209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:37:45.416459  486209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:37:45.417029  486209 out.go:368] Setting JSON to true
	I1201 20:37:45.418317  486209 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8415,"bootTime":1764613051,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 20:37:45.418406  486209 start.go:143] virtualization:  
	I1201 20:37:45.422083  486209 out.go:99] [download-only-000800] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 20:37:45.422377  486209 notify.go:221] Checking for updates...
	I1201 20:37:45.425338  486209 out.go:171] MINIKUBE_LOCATION=21997
	I1201 20:37:45.428358  486209 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:37:45.431422  486209 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 20:37:45.434439  486209 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 20:37:45.437381  486209 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1201 20:37:45.443315  486209 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1201 20:37:45.443604  486209 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:37:45.469562  486209 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 20:37:45.469680  486209 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:37:45.528196  486209 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-01 20:37:45.518435353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 20:37:45.528305  486209 docker.go:319] overlay module found
	I1201 20:37:45.531320  486209 out.go:99] Using the docker driver based on user configuration
	I1201 20:37:45.531361  486209 start.go:309] selected driver: docker
	I1201 20:37:45.531373  486209 start.go:927] validating driver "docker" against <nil>
	I1201 20:37:45.531495  486209 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:37:45.586301  486209 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-01 20:37:45.577150441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 20:37:45.586467  486209 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1201 20:37:45.586743  486209 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1201 20:37:45.586893  486209 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1201 20:37:45.589984  486209 out.go:171] Using Docker driver with root privileges
	I1201 20:37:45.593042  486209 cni.go:84] Creating CNI manager for ""
	I1201 20:37:45.593128  486209 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:37:45.593143  486209 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1201 20:37:45.593232  486209 start.go:353] cluster config:
	{Name:download-only-000800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-000800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:37:45.596190  486209 out.go:99] Starting "download-only-000800" primary control-plane node in "download-only-000800" cluster
	I1201 20:37:45.596224  486209 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:37:45.599199  486209 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:37:45.599262  486209 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:37:45.599486  486209 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 20:37:45.619758  486209 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:37:45.619780  486209 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1201 20:37:45.619878  486209 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1201 20:37:45.619902  486209 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1201 20:37:45.619906  486209 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1201 20:37:45.619914  486209 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1201 20:37:45.690058  486209 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1201 20:37:45.690084  486209 cache.go:65] Caching tarball of preloaded images
	I1201 20:37:45.690276  486209 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:37:45.693473  486209 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1201 20:37:45.693515  486209 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1201 20:37:45.792088  486209 preload.go:295] Got checksum from GCS API "36a1245638f6169d426638fac0bd307d"
	I1201 20:37:45.792149  486209 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:36a1245638f6169d426638fac0bd307d -> /home/jenkins/minikube-integration/21997-482752/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-000800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-000800"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-000800
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.35.0-beta.0/json-events (3.88s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-193191 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-193191 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.881669146s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.88s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-193191
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-193191: exit status 85 (84.825208ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-111439 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-111439 │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │ 01 Dec 25 20:37 UTC │
	│ delete  │ -p download-only-111439                                                                                                                                                          │ download-only-111439 │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │ 01 Dec 25 20:37 UTC │
	│ start   │ -o=json --download-only -p download-only-000800 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-000800 │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │ 01 Dec 25 20:37 UTC │
	│ delete  │ -p download-only-000800                                                                                                                                                          │ download-only-000800 │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │ 01 Dec 25 20:37 UTC │
	│ start   │ -o=json --download-only -p download-only-193191 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-193191 │ jenkins │ v1.37.0 │ 01 Dec 25 20:37 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:37:52
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:37:52.819742  486409 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:37:52.819937  486409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:37:52.819949  486409 out.go:374] Setting ErrFile to fd 2...
	I1201 20:37:52.819955  486409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:37:52.820226  486409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:37:52.820638  486409 out.go:368] Setting JSON to true
	I1201 20:37:52.821454  486409 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8422,"bootTime":1764613051,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 20:37:52.821522  486409 start.go:143] virtualization:  
	I1201 20:37:52.824990  486409 out.go:99] [download-only-193191] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 20:37:52.825291  486409 notify.go:221] Checking for updates...
	I1201 20:37:52.829146  486409 out.go:171] MINIKUBE_LOCATION=21997
	I1201 20:37:52.832926  486409 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:37:52.835779  486409 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 20:37:52.838793  486409 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 20:37:52.841645  486409 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1201 20:37:52.847246  486409 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1201 20:37:52.847553  486409 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:37:52.878668  486409 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 20:37:52.878797  486409 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:37:52.951020  486409 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-01 20:37:52.940980684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 20:37:52.951175  486409 docker.go:319] overlay module found
	I1201 20:37:52.954344  486409 out.go:99] Using the docker driver based on user configuration
	I1201 20:37:52.954392  486409 start.go:309] selected driver: docker
	I1201 20:37:52.954405  486409 start.go:927] validating driver "docker" against <nil>
	I1201 20:37:52.954536  486409 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:37:53.017084  486409 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-01 20:37:53.006738177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 20:37:53.017255  486409 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1201 20:37:53.017554  486409 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1201 20:37:53.017708  486409 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1201 20:37:53.020917  486409 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-193191 host does not exist
	  To start a cluster, run: "minikube start -p download-only-193191"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-193191
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.68s)

=== RUN   TestBinaryMirror
I1201 20:37:58.199672  486002 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-986055 --alsologtostderr --binary-mirror http://127.0.0.1:41593 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-986055" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-986055
--- PASS: TestBinaryMirror (0.68s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-947185
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-947185: exit status 85 (89.113968ms)

-- stdout --
	* Profile "addons-947185" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-947185"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-947185
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-947185: exit status 85 (99.599976ms)

-- stdout --
	* Profile "addons-947185" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-947185"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

TestAddons/Setup (155.03s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-947185 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-947185 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m35.02671698s)
--- PASS: TestAddons/Setup (155.03s)

TestAddons/serial/GCPAuth/Namespaces (0.22s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-947185 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-947185 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

TestAddons/serial/GCPAuth/FakeCredentials (8.83s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-947185 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-947185 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d41567da-65af-4b0c-b8a8-fb88bc0b0497] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d41567da-65af-4b0c-b8a8-fb88bc0b0497] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003637191s
addons_test.go:694: (dbg) Run:  kubectl --context addons-947185 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-947185 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-947185 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-947185 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.83s)

TestAddons/StoppedEnableDisable (12.54s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-947185
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-947185: (12.224673306s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-947185
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-947185
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-947185
--- PASS: TestAddons/StoppedEnableDisable (12.54s)

TestCertOptions (35.91s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-019907 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1201 22:15:34.916548  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-019907 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (32.923647132s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-019907 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-019907 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-019907 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-019907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-019907
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-019907: (2.191606373s)
--- PASS: TestCertOptions (35.91s)

TestCertExpiration (329.94s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-663052 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1201 22:12:52.876844  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-663052 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (35.588918677s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-663052 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-663052 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (1m51.500462415s)
helpers_test.go:175: Cleaning up "cert-expiration-663052" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-663052
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-663052: (2.849532547s)
--- PASS: TestCertExpiration (329.94s)

TestForceSystemdFlag (35.42s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-734785 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-734785 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.533450711s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-734785 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-734785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-734785
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-734785: (2.564697835s)
--- PASS: TestForceSystemdFlag (35.42s)

TestForceSystemdEnv (33.33s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-065520 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-065520 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (30.810782243s)
helpers_test.go:175: Cleaning up "force-systemd-env-065520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-065520
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-065520: (2.523546814s)
--- PASS: TestForceSystemdEnv (33.33s)

TestErrorSpam/setup (31.51s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-386789 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-386789 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-386789 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-386789 --driver=docker  --container-runtime=crio: (31.514843443s)
--- PASS: TestErrorSpam/setup (31.51s)

TestErrorSpam/start (0.8s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

TestErrorSpam/status (1.13s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 status
--- PASS: TestErrorSpam/status (1.13s)

TestErrorSpam/pause (6.25s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 pause: exit status 80 (1.856016895s)

-- stdout --
	* Pausing node nospam-386789 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:44:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 pause: exit status 80 (2.22851494s)

-- stdout --
	* Pausing node nospam-386789 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:44:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 pause: exit status 80 (2.160041338s)

-- stdout --
	* Pausing node nospam-386789 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:44:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.25s)

TestErrorSpam/unpause (5.88s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 unpause: exit status 80 (1.803900946s)

-- stdout --
	* Unpausing node nospam-386789 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:44:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 unpause: exit status 80 (1.863667298s)

-- stdout --
	* Unpausing node nospam-386789 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:44:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 unpause: exit status 80 (2.214638185s)

-- stdout --
	* Unpausing node nospam-386789 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:44:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.88s)

TestErrorSpam/stop (1.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 stop: (1.304784519s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-386789 --log_dir /tmp/nospam-386789 stop
--- PASS: TestErrorSpam/stop (1.51s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (77.98s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-074555 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1201 20:45:34.916807  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:45:34.923585  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:45:34.935027  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:45:34.956542  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:45:34.998003  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:45:35.079530  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:45:35.241111  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:45:35.562788  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:45:36.204881  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:45:37.487244  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:45:40.048578  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:45:45.171215  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:45:55.412693  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:46:15.894393  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-074555 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m17.980677942s)
--- PASS: TestFunctional/serial/StartWithProxy (77.98s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.29s)

=== RUN   TestFunctional/serial/SoftStart
I1201 20:46:21.199622  486002 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-074555 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-074555 --alsologtostderr -v=8: (29.290339082s)
functional_test.go:678: soft start took 29.293620097s for "functional-074555" cluster.
I1201 20:46:50.490291  486002 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (29.29s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-074555 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-074555 cache add registry.k8s.io/pause:3.1: (1.278612622s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-074555 cache add registry.k8s.io/pause:3.3: (1.210767813s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-074555 cache add registry.k8s.io/pause:latest: (1.109880259s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.60s)

TestFunctional/serial/CacheCmd/cache/add_local (1.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-074555 /tmp/TestFunctionalserialCacheCmdcacheadd_local1376561258/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 cache add minikube-local-cache-test:functional-074555
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 cache delete minikube-local-cache-test:functional-074555
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-074555
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074555 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (298.952282ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 cache reload
E1201 20:46:56.855973  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 kubectl -- --context functional-074555 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-074555 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (45.3s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-074555 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-074555 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.29955149s)
functional_test.go:776: restart took 45.299654109s for "functional-074555" cluster.
I1201 20:47:43.402090  486002 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (45.30s)

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-074555 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.68s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-074555 logs: (1.681224038s)
--- PASS: TestFunctional/serial/LogsCmd (1.68s)

TestFunctional/serial/LogsFileCmd (1.59s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 logs --file /tmp/TestFunctionalserialLogsFileCmd3697419911/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-074555 logs --file /tmp/TestFunctionalserialLogsFileCmd3697419911/001/logs.txt: (1.59110318s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.59s)

TestFunctional/serial/InvalidService (4.24s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-074555 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-074555
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-074555: exit status 115 (405.284986ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30638 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-074555 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.24s)

TestFunctional/parallel/ConfigCmd (0.48s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074555 config get cpus: exit status 14 (73.420453ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074555 config get cpus: exit status 14 (83.311888ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)

TestFunctional/parallel/DashboardCmd (10.22s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-074555 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-074555 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 512688: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.22s)

TestFunctional/parallel/DryRun (0.53s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-074555 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-074555 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (253.859634ms)

-- stdout --
	* [functional-074555] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1201 20:58:20.543082  512181 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:58:20.543240  512181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:58:20.543252  512181 out.go:374] Setting ErrFile to fd 2...
	I1201 20:58:20.543259  512181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:58:20.543493  512181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:58:20.543870  512181 out.go:368] Setting JSON to false
	I1201 20:58:20.547793  512181 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":9650,"bootTime":1764613051,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 20:58:20.547971  512181 start.go:143] virtualization:  
	I1201 20:58:20.551227  512181 out.go:179] * [functional-074555] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 20:58:20.555103  512181 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:58:20.555210  512181 notify.go:221] Checking for updates...
	I1201 20:58:20.561952  512181 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:58:20.564871  512181 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 20:58:20.567722  512181 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 20:58:20.570577  512181 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 20:58:20.573423  512181 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:58:20.576776  512181 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:58:20.577338  512181 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:58:20.612988  512181 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 20:58:20.613102  512181 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:58:20.698507  512181 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 20:58:20.684065698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 20:58:20.698608  512181 docker.go:319] overlay module found
	I1201 20:58:20.701563  512181 out.go:179] * Using the docker driver based on existing profile
	I1201 20:58:20.709722  512181 start.go:309] selected driver: docker
	I1201 20:58:20.709751  512181 start.go:927] validating driver "docker" against &{Name:functional-074555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-074555 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:58:20.709848  512181 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:58:20.713399  512181 out.go:203] 
	W1201 20:58:20.716579  512181 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1201 20:58:20.719740  512181 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-074555 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.53s)

TestFunctional/parallel/InternationalLanguage (0.26s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-074555 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-074555 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (257.693562ms)

-- stdout --
	* [functional-074555] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1201 20:58:20.290075  512091 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:58:20.290314  512091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:58:20.290342  512091 out.go:374] Setting ErrFile to fd 2...
	I1201 20:58:20.290362  512091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:58:20.290781  512091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 20:58:20.291230  512091 out.go:368] Setting JSON to false
	I1201 20:58:20.293692  512091 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":9650,"bootTime":1764613051,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 20:58:20.293804  512091 start.go:143] virtualization:  
	I1201 20:58:20.297519  512091 out.go:179] * [functional-074555] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1201 20:58:20.301234  512091 notify.go:221] Checking for updates...
	I1201 20:58:20.304822  512091 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:58:20.308027  512091 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:58:20.310835  512091 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 20:58:20.315235  512091 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 20:58:20.318157  512091 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 20:58:20.321168  512091 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:58:20.324616  512091 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:58:20.325216  512091 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:58:20.361681  512091 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 20:58:20.361791  512091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:58:20.446818  512091 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 20:58:20.436511659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 20:58:20.446927  512091 docker.go:319] overlay module found
	I1201 20:58:20.450074  512091 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1201 20:58:20.452906  512091 start.go:309] selected driver: docker
	I1201 20:58:20.452931  512091 start.go:927] validating driver "docker" against &{Name:functional-074555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-074555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:58:20.453023  512091 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:58:20.456438  512091 out.go:203] 
	W1201 20:58:20.459364  512091 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1201 20:58:20.462279  512091 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

TestFunctional/parallel/StatusCmd (1.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.25s)

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (25.87s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [4106e937-5606-438b-bc9e-b9c8b32f2832] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0033951s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-074555 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-074555 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-074555 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-074555 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [015bb0d4-736e-48b4-9a7e-4b6c1bc52fb1] Pending
helpers_test.go:352: "sp-pod" [015bb0d4-736e-48b4-9a7e-4b6c1bc52fb1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [015bb0d4-736e-48b4-9a7e-4b6c1bc52fb1] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.00407706s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-074555 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-074555 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-074555 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ea76a26b-6513-42c5-8190-8a9a1729eb26] Pending
helpers_test.go:352: "sp-pod" [ea76a26b-6513-42c5-8190-8a9a1729eb26] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ea76a26b-6513-42c5-8190-8a9a1729eb26] Running
E1201 20:48:18.778207  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0041486s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-074555 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.87s)
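The claim/pod round-trip above (apply the claim, mount it in `sp-pod`, `touch /tmp/mount/foo`, delete and recreate the pod, then `ls /tmp/mount`) exercises dynamic provisioning by minikube's storage-provisioner addon. A hypothetical sketch of the manifest shape being applied — the actual `testdata/storage-provisioner/pvc.yaml` may differ in name and size:

```yaml
# Sketch only: a minimal claim of the kind the test applies; the default
# StorageClass in minikube binds it dynamically via the storage-provisioner addon.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
```

The pod recreation step is what proves persistence: the file written by the first `sp-pod` must still be visible after a fresh pod mounts the same claim.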

TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

TestFunctional/parallel/CpCmd (2.5s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh -n functional-074555 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 cp functional-074555:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3796138896/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh -n functional-074555 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh -n functional-074555 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.50s)

TestFunctional/parallel/FileSync (0.38s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/486002/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "sudo cat /etc/test/nested/copy/486002/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

TestFunctional/parallel/CertSync (2.18s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/486002.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "sudo cat /etc/ssl/certs/486002.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/486002.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "sudo cat /usr/share/ca-certificates/486002.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4860022.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "sudo cat /etc/ssl/certs/4860022.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4860022.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "sudo cat /usr/share/ca-certificates/4860022.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.18s)
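The `.0` filenames probed above (`/etc/ssl/certs/51391683.0`, `/etc/ssl/certs/3ec20f2e.0`) are OpenSSL subject-hash names: the synced PEM is also installed under the hex hash of its subject DN plus a `.0` suffix, which is how the OpenSSL trust store looks certificates up. The derivation can be sketched with any throwaway certificate (`/tmp/demo.pem` and the `/CN=demo` subject below are illustrative, not from the test):

```shell
# Create a throwaway self-signed cert, then derive the /etc/ssl/certs-style
# link name (<subject-hash>.0), the same scheme c_rehash/update-ca-certificates use.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.pem 2>/dev/null
hash=$(openssl x509 -noout -hash -in /tmp/demo.pem)
echo "${hash}.0"   # eight hex digits + ".0"; the hash depends only on the subject DN
```

This is why the test checks both the `.pem` path and the hash-named path: clients that walk the hashed directory only find the cert via the latter.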

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-074555 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074555 ssh "sudo systemctl is-active docker": exit status 1 (372.122503ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074555 ssh "sudo systemctl is-active containerd": exit status 1 (356.214521ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
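The two non-zero exits above are the expected outcome: with crio as the active runtime, `systemctl is-active docker` (and `containerd`) prints `inactive` on stdout and exits 3, per systemctl(1), and `minikube ssh` surfaces that as exit status 1. The probe pattern — keep the stdout text and the exit code separately, since the test asserts on both — can be sketched without systemd (`probe` and the inline `sh -c` stand in for the real `systemctl` call):

```shell
# Hypothetical stand-in for `minikube ssh "sudo systemctl is-active docker"`:
# capture stdout ("inactive") and the exit code (3) independently.
probe='echo inactive; exit 3'
out=$(sh -c "$probe"); rc=$?
echo "out=$out rc=$rc"   # → out=inactive rc=3
```

Checking only the exit code would not distinguish `inactive` from `failed`; checking only stdout would miss a unit that prints `active` but is flapping, so the test inspects both.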

TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-074555 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-074555 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-074555 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 508781: os: process already finished
helpers_test.go:519: unable to terminate pid 508569: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-074555 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-074555 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-074555 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [9728056e-6be4-4120-8eb6-a0238ab032ff] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [9728056e-6be4-4120-8eb6-a0238ab032ff] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.005264286s
I1201 20:48:02.358225  486002 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.48s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-074555 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.187.170 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-074555 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "392.059389ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.006748ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "382.828909ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "59.941805ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

TestFunctional/parallel/MountCmd/any-port (7.7s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-074555 /tmp/TestFunctionalparallelMountCmdany-port2531474451/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764622687762847085" to /tmp/TestFunctionalparallelMountCmdany-port2531474451/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764622687762847085" to /tmp/TestFunctionalparallelMountCmdany-port2531474451/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764622687762847085" to /tmp/TestFunctionalparallelMountCmdany-port2531474451/001/test-1764622687762847085
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074555 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (367.31241ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1201 20:58:08.130476  486002 retry.go:31] will retry after 268.692237ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  1 20:58 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  1 20:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  1 20:58 test-1764622687762847085
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh cat /mount-9p/test-1764622687762847085
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-074555 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [b9eac74c-8543-43ab-a81f-c64bde2d5ec4] Pending
helpers_test.go:352: "busybox-mount" [b9eac74c-8543-43ab-a81f-c64bde2d5ec4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [b9eac74c-8543-43ab-a81f-c64bde2d5ec4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [b9eac74c-8543-43ab-a81f-c64bde2d5ec4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00372123s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-074555 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-074555 /tmp/TestFunctionalparallelMountCmdany-port2531474451/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.70s)
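The first `findmnt -T /mount-9p` probe above races the mount daemon and fails, so the harness retries after a short delay (the `retry.go:31` line). A minimal, hypothetical version of that poll-until-success loop — the real harness uses a randomized, growing backoff rather than a fixed sleep:

```shell
# Sketch of a poll loop standing in for minikube's retry.go backoff:
# rerun the probe until it succeeds or attempts run out.
retry() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 0.2   # fixed delay here; the real code grows the interval per attempt
  done
  return 1
}
retry 3 true && echo mounted   # → mounted
```

In the log, the second `findmnt` run (after the 268ms retry) succeeds, so a single retry was enough.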

TestFunctional/parallel/MountCmd/specific-port (2.16s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-074555 /tmp/TestFunctionalparallelMountCmdspecific-port3394827730/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074555 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (375.440832ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1201 20:58:15.835463  486002 retry.go:31] will retry after 740.794465ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-074555 /tmp/TestFunctionalparallelMountCmdspecific-port3394827730/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074555 ssh "sudo umount -f /mount-9p": exit status 1 (281.960206ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-074555 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-074555 /tmp/TestFunctionalparallelMountCmdspecific-port3394827730/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.16s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.34s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-074555 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2560449507/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-074555 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2560449507/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-074555 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2560449507/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-074555 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-074555 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2560449507/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-074555 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2560449507/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-074555 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2560449507/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.34s)

TestFunctional/parallel/ServiceCmd/List (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.65s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 service list -o json
functional_test.go:1504: Took "629.80466ms" to run "out/minikube-linux-arm64 -p functional-074555 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.71s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.71s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-074555 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-074555 image ls --format short --alsologtostderr:
I1201 20:58:34.697270  514730 out.go:360] Setting OutFile to fd 1 ...
I1201 20:58:34.697503  514730 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 20:58:34.697531  514730 out.go:374] Setting ErrFile to fd 2...
I1201 20:58:34.697550  514730 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 20:58:34.697861  514730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
I1201 20:58:34.698699  514730 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 20:58:34.701157  514730 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 20:58:34.701836  514730 cli_runner.go:164] Run: docker container inspect functional-074555 --format={{.State.Status}}
I1201 20:58:34.726325  514730 ssh_runner.go:195] Run: systemctl --version
I1201 20:58:34.726415  514730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074555
I1201 20:58:34.768757  514730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-074555/id_rsa Username:docker}
I1201 20:58:34.887767  514730 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-074555 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 94bff1bec29fd │ 75.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 4f982e73e768a │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ b178af3d91f80 │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 1b34917560f09 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-074555 image ls --format table --alsologtostderr:
I1201 20:58:35.421661  514938 out.go:360] Setting OutFile to fd 1 ...
I1201 20:58:35.421816  514938 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 20:58:35.421829  514938 out.go:374] Setting ErrFile to fd 2...
I1201 20:58:35.421835  514938 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 20:58:35.422144  514938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
I1201 20:58:35.422807  514938 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 20:58:35.422963  514938 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 20:58:35.423580  514938 cli_runner.go:164] Run: docker container inspect functional-074555 --format={{.State.Status}}
I1201 20:58:35.441613  514938 ssh_runner.go:195] Run: systemctl --version
I1201 20:58:35.441678  514938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074555
I1201 20:58:35.462358  514938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-074555/id_rsa Username:docker}
I1201 20:58:35.569878  514938 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-074555 image ls --format json --alsologtostderr:
[{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"75941783"},{"id":"4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe","registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"51592021"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89","registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"72629077"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84","registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"84753391"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest"],"size":"175943180"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-074555 image ls --format json --alsologtostderr:
I1201 20:58:35.161235  514868 out.go:360] Setting OutFile to fd 1 ...
I1201 20:58:35.161426  514868 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 20:58:35.161439  514868 out.go:374] Setting ErrFile to fd 2...
I1201 20:58:35.161446  514868 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 20:58:35.161756  514868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
I1201 20:58:35.162420  514868 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 20:58:35.162588  514868 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 20:58:35.163169  514868 cli_runner.go:164] Run: docker container inspect functional-074555 --format={{.State.Status}}
I1201 20:58:35.186994  514868 ssh_runner.go:195] Run: systemctl --version
I1201 20:58:35.187054  514868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074555
I1201 20:58:35.208171  514868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-074555/id_rsa Username:docker}
I1201 20:58:35.317838  514868 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
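The `image ls --format json` output above is a flat JSON array of objects with `id`, `repoDigests`, `repoTags`, and `size` (a decimal-byte count as a string) fields. As a minimal sketch, not part of the test suite, the list can be summarized with the standard library alone; the two abbreviated records below are drawn from the output above:

```python
import json

# Two abbreviated records copied from the `image ls --format json` output above.
raw = """[
  {"id": "138784d87c9c5", "repoDigests": [],
   "repoTags": ["registry.k8s.io/coredns/coredns:v1.12.1"], "size": "73195387"},
  {"id": "a422e0e982356", "repoDigests": [], "repoTags": [], "size": "42263767"}
]"""

def summarize(images):
    """Return (tag, size-in-MB) pairs, mirroring the `--format table` view."""
    rows = []
    for img in images:
        # Untagged images (empty repoTags) show as <none>, as in the table output.
        tag = img["repoTags"][0] if img["repoTags"] else "<none>"
        size_mb = round(int(img["size"]) / 1_000_000, 1)  # sizes are decimal bytes
        rows.append((tag, size_mb))
    return rows

print(summarize(json.loads(raw)))
# → [('registry.k8s.io/coredns/coredns:v1.12.1', 73.2), ('<none>', 42.3)]
```

The rounded sizes match the table view above (coredns at 73.2MB); the schema itself comes from `crictl images --output json`, which the command shells out to.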

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-074555 image ls --format yaml --alsologtostderr:
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "75941783"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "51592021"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "84753391"
- id: 1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "72629077"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-074555 image ls --format yaml --alsologtostderr:
I1201 20:58:34.861775  514788 out.go:360] Setting OutFile to fd 1 ...
I1201 20:58:34.861958  514788 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 20:58:34.861989  514788 out.go:374] Setting ErrFile to fd 2...
I1201 20:58:34.862014  514788 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 20:58:34.862461  514788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
I1201 20:58:34.863882  514788 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 20:58:34.864373  514788 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 20:58:34.864961  514788 cli_runner.go:164] Run: docker container inspect functional-074555 --format={{.State.Status}}
I1201 20:58:34.888641  514788 ssh_runner.go:195] Run: systemctl --version
I1201 20:58:34.888697  514788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074555
I1201 20:58:34.920869  514788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-074555/id_rsa Username:docker}
I1201 20:58:35.047470  514788 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074555 ssh pgrep buildkitd: exit status 1 (411.665976ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 image build -t localhost/my-image:functional-074555 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-074555 image build -t localhost/my-image:functional-074555 testdata/build --alsologtostderr: (3.575747727s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-074555 image build -t localhost/my-image:functional-074555 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f9e4131e793
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-074555
--> 95d56bd98b9
Successfully tagged localhost/my-image:functional-074555
95d56bd98b94fbb1d1c186131aabed5c574a634f490af2a5e292eed5cf15140c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-074555 image build -t localhost/my-image:functional-074555 testdata/build --alsologtostderr:
I1201 20:58:35.421251  514937 out.go:360] Setting OutFile to fd 1 ...
I1201 20:58:35.422036  514937 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 20:58:35.422067  514937 out.go:374] Setting ErrFile to fd 2...
I1201 20:58:35.422091  514937 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 20:58:35.422407  514937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
I1201 20:58:35.423090  514937 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 20:58:35.423941  514937 config.go:182] Loaded profile config "functional-074555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 20:58:35.424477  514937 cli_runner.go:164] Run: docker container inspect functional-074555 --format={{.State.Status}}
I1201 20:58:35.444796  514937 ssh_runner.go:195] Run: systemctl --version
I1201 20:58:35.444845  514937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074555
I1201 20:58:35.471257  514937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-074555/id_rsa Username:docker}
I1201 20:58:35.581803  514937 build_images.go:162] Building image from path: /tmp/build.636860499.tar
I1201 20:58:35.581889  514937 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1201 20:58:35.592899  514937 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.636860499.tar
I1201 20:58:35.598111  514937 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.636860499.tar: stat -c "%s %y" /var/lib/minikube/build/build.636860499.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.636860499.tar': No such file or directory
I1201 20:58:35.598144  514937 ssh_runner.go:362] scp /tmp/build.636860499.tar --> /var/lib/minikube/build/build.636860499.tar (3072 bytes)
I1201 20:58:35.630276  514937 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.636860499
I1201 20:58:35.640869  514937 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.636860499 -xf /var/lib/minikube/build/build.636860499.tar
I1201 20:58:35.650244  514937 crio.go:315] Building image: /var/lib/minikube/build/build.636860499
I1201 20:58:35.650313  514937 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-074555 /var/lib/minikube/build/build.636860499 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1201 20:58:38.907441  514937 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-074555 /var/lib/minikube/build/build.636860499 --cgroup-manager=cgroupfs: (3.257098001s)
I1201 20:58:38.907509  514937 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.636860499
I1201 20:58:38.915669  514937 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.636860499.tar
I1201 20:58:38.923067  514937 build_images.go:218] Built localhost/my-image:functional-074555 from /tmp/build.636860499.tar
I1201 20:58:38.923097  514937 build_images.go:134] succeeded building to: functional-074555
I1201 20:58:38.923103  514937 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.23s)

TestFunctional/parallel/ImageCommands/Setup (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-074555
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.65s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 image rm kicbase/echo-server:functional-074555 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-074555 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-074555
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-074555
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-074555
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-482752/.minikube/files/etc/test/nested/copy/486002/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.6s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-198694 cache add registry.k8s.io/pause:3.1: (1.182763039s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-198694 cache add registry.k8s.io/pause:3.3: (1.231354617s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-198694 cache add registry.k8s.io/pause:latest: (1.182736286s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.60s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3545069999/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 cache add minikube-local-cache-test:functional-198694
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 cache delete minikube-local-cache-test:functional-198694
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-198694
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-198694 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (303.32709ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.84s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.96s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.96s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2523603015/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-198694 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2523603015/001/logs.txt: (1.038731015s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-198694 config get cpus: exit status 14 (93.368117ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-198694 config get cpus: exit status 14 (85.292539ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-198694 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-198694 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (193.057329ms)
-- stdout --
	* [functional-198694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1201 21:27:57.659613  546280 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:27:57.659779  546280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:27:57.659791  546280 out.go:374] Setting ErrFile to fd 2...
	I1201 21:27:57.659797  546280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:27:57.660062  546280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:27:57.660445  546280 out.go:368] Setting JSON to false
	I1201 21:27:57.661305  546280 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11427,"bootTime":1764613051,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 21:27:57.661380  546280 start.go:143] virtualization:  
	I1201 21:27:57.665321  546280 out.go:179] * [functional-198694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1201 21:27:57.669260  546280 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 21:27:57.669399  546280 notify.go:221] Checking for updates...
	I1201 21:27:57.675191  546280 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 21:27:57.678204  546280 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:27:57.681109  546280 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 21:27:57.684105  546280 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 21:27:57.687060  546280 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 21:27:57.690557  546280 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:27:57.691226  546280 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 21:27:57.712454  546280 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 21:27:57.712594  546280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:27:57.780822  546280 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 21:27:57.770600597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:27:57.780951  546280 docker.go:319] overlay module found
	I1201 21:27:57.785996  546280 out.go:179] * Using the docker driver based on existing profile
	I1201 21:27:57.788945  546280 start.go:309] selected driver: docker
	I1201 21:27:57.788965  546280 start.go:927] validating driver "docker" against &{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:27:57.789090  546280 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 21:27:57.792739  546280 out.go:203] 
	W1201 21:27:57.795548  546280 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1201 21:27:57.798446  546280 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-198694 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-198694 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-198694 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (240.150305ms)
-- stdout --
	* [functional-198694] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1201 21:27:58.101372  546398 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:27:58.101548  546398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:27:58.101555  546398 out.go:374] Setting ErrFile to fd 2...
	I1201 21:27:58.101561  546398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:27:58.101999  546398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:27:58.102413  546398 out.go:368] Setting JSON to false
	I1201 21:27:58.103413  546398 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11428,"bootTime":1764613051,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1201 21:27:58.103489  546398 start.go:143] virtualization:  
	I1201 21:27:58.106728  546398 out.go:179] * [functional-198694] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1201 21:27:58.110431  546398 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 21:27:58.110606  546398 notify.go:221] Checking for updates...
	I1201 21:27:58.116551  546398 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 21:27:58.119475  546398 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	I1201 21:27:58.122388  546398 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	I1201 21:27:58.125369  546398 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1201 21:27:58.128308  546398 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 21:27:58.131852  546398 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 21:27:58.132449  546398 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 21:27:58.170277  546398 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1201 21:27:58.170455  546398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:27:58.267551  546398 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-01 21:27:58.257057975 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:27:58.267677  546398 docker.go:319] overlay module found
	I1201 21:27:58.271024  546398 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1201 21:27:58.274017  546398 start.go:309] selected driver: docker
	I1201 21:27:58.274047  546398 start.go:927] validating driver "docker" against &{Name:functional-198694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-198694 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 21:27:58.274175  546398 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 21:27:58.277846  546398 out.go:203] 
	W1201 21:27:58.280883  546398 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1201 21:27:58.283947  546398 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.24s)
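The French stderr above is the expected output of the InternationalLanguage test; the message translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation 250MiB is less than the usable minimum of 1800MB". Note the message mixes MiB and MB; a minimal sketch of that unit conversion and floor check (values taken from the message, the 262MB figure is our conversion, not minikube's code):

```shell
# Convert the requested 250MiB to MB and compare against the 1800MB floor,
# mirroring the RSRC_INSUFFICIENT_REQ_MEMORY validation reported above.
req_mib=250
req_mb=$(( req_mib * 1024 * 1024 / 1000000 ))  # MiB -> MB, integer division
min_mb=1800
if [ "$req_mb" -lt "$min_mb" ]; then
  echo "insufficient: ${req_mb}MB < ${min_mb}MB"
fi
```

The test passes precisely because minikube refuses to start and prints this error in the requested locale.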

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.73s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh -n functional-198694 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 cp functional-198694:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1037925037/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh -n functional-198694 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh -n functional-198694 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/486002/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "sudo cat /etc/test/nested/copy/486002/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/486002.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "sudo cat /etc/ssl/certs/486002.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/486002.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "sudo cat /usr/share/ca-certificates/486002.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4860022.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "sudo cat /etc/ssl/certs/4860022.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4860022.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "sudo cat /usr/share/ca-certificates/4860022.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.75s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-198694 ssh "sudo systemctl is-active docker": exit status 1 (382.817155ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-198694 ssh "sudo systemctl is-active containerd": exit status 1 (369.151713ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.75s)
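The non-zero exits above are the expected result: `systemctl is-active` prints the unit state and exits with status 3 for an inactive unit, which the test reads as "this runtime is disabled" on a crio profile. A stub-based sketch of that check (the `is_active` function below is a stand-in for `minikube ssh "sudo systemctl is-active <unit>"`, an assumption so the sketch runs without systemd):

```shell
# Stand-in for `systemctl is-active`: exit 0 with "active" only for the
# configured runtime (crio on this profile), exit 3 with "inactive" otherwise.
is_active() {
  case "$1" in
    crio) echo active;   return 0 ;;
    *)    echo inactive; return 3 ;;
  esac
}

for rt in docker containerd crio; do
  state=$(is_active "$rt"); rc=$?
  echo "$rt: $state (exit $rc)"
done
```

With this stub, docker and containerd report `inactive (exit 3)` and crio reports `active (exit 0)`, matching the captured stdout/stderr pairs above.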

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.66s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.66s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-198694 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-198694 image ls --format short --alsologtostderr:
I1201 21:28:01.476725  547052 out.go:360] Setting OutFile to fd 1 ...
I1201 21:28:01.476927  547052 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 21:28:01.476960  547052 out.go:374] Setting ErrFile to fd 2...
I1201 21:28:01.476982  547052 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 21:28:01.477266  547052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
I1201 21:28:01.477921  547052 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 21:28:01.478104  547052 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 21:28:01.478699  547052 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
I1201 21:28:01.496416  547052 ssh_runner.go:195] Run: systemctl --version
I1201 21:28:01.496479  547052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
I1201 21:28:01.521206  547052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
I1201 21:28:01.630608  547052 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-198694 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.13.1           │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0    │ ccd634d9bcc36 │ 84.9MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0    │ 404c2e1286177 │ 74.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0    │ 16378741539f1 │ 49.8MB │
│ registry.k8s.io/pause                   │ 3.1               │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ latest            │ 8cb2091f603e7 │ 246kB  │
│ gcr.io/k8s-minikube/busybox             │ latest            │ 71a676dd070f4 │ 1.63MB │
│ localhost/my-image                      │ functional-198694 │ 7f0efee0cc68f │ 1.64MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0           │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0    │ 68b5f775f1876 │ 72.2MB │
│ registry.k8s.io/pause                   │ 3.10.1            │ d7b100cd9a77b │ 517kB  │
│ registry.k8s.io/pause                   │ 3.3               │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                │ 66749159455b3 │ 29MB   │
└─────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-198694 image ls --format table --alsologtostderr:
I1201 21:28:05.879766  547545 out.go:360] Setting OutFile to fd 1 ...
I1201 21:28:05.879947  547545 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 21:28:05.879979  547545 out.go:374] Setting ErrFile to fd 2...
I1201 21:28:05.880000  547545 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 21:28:05.880304  547545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
I1201 21:28:05.881015  547545 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 21:28:05.881194  547545 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 21:28:05.881740  547545 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
I1201 21:28:05.899688  547545 ssh_runner.go:195] Run: systemctl --version
I1201 21:28:05.899746  547545 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
I1201 21:28:05.919916  547545 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
I1201 21:28:06.026672  547545 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-198694 image ls --format json --alsologtostderr:
[{"id":"0b88b158e601ef3e9579fa9ece57e2dca283cc9a2bfdea739b9f98353dedfb03","repoDigests":["docker.io/library/109dfde1288b2b1732f77e0ba3821b2555a5c01af14a659ddb99c2f84071be1f-tmp@sha256:1ddd8d069f321e16d932268d08ae89094340696b415b7f2b0f442a7d4a3d2ec3"],"repoTags":[],"size":"1638179"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29035622"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["reg
istry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74488375"},{"id":"68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"72167568"},{"id":"404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"74105124"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"517328"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b
1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"7f0efee0cc68fb39bd9cc45fcc38234bec627d1dd4c233b213d85763a5342024","repoDigests":["localhost/my-image@sha256:47d048c9bcf1ec9ad71d0376cbae35e35ed01471078dc3e01fadea322d25497d"],"repoTags":["localhost/my-image:functional-198694"],"size":"1640791"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60854229"},{"id":"ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"84947242"},{"id":"16378741539f1be9c6e347d127537d379a6592587
b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"49819792"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-198694 image ls --format json --alsologtostderr:
I1201 21:28:05.648489  547504 out.go:360] Setting OutFile to fd 1 ...
I1201 21:28:05.648724  547504 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 21:28:05.648753  547504 out.go:374] Setting ErrFile to fd 2...
I1201 21:28:05.648774  547504 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 21:28:05.649076  547504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
I1201 21:28:05.649752  547504 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 21:28:05.649936  547504 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 21:28:05.650561  547504 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
I1201 21:28:05.670613  547504 ssh_runner.go:195] Run: systemctl --version
I1201 21:28:05.670666  547504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
I1201 21:28:05.688229  547504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
I1201 21:28:05.789653  547504 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.25s)
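The JSON above is the crictl-style image listing (`id`, `repoDigests`, `repoTags`, `size`) that backs all four `image ls --format` variants. A minimal sketch of pulling the tags out of that shape with grep/sed, using a small inlined sample in place of the real command pipe (the sample values are ours, chosen to match entries in the listing above):

```shell
# Sample shaped like the `image ls --format json` output above (inlined as
# an assumption; a real run would pipe the minikube command output instead).
json='[{"id":"abc","repoTags":["registry.k8s.io/pause:3.1"]},{"id":"def","repoTags":["registry.k8s.io/etcd:3.6.5-0"]}]'

# Extract the first tag of each entry: match `"repoTags":["..."`, then strip
# the field name prefix and the trailing quote, one tag per line.
echo "$json" | grep -o '"repoTags":\["[^"]*"' | sed -e 's/.*\["//' -e 's/"$//'
```

Entries with empty `repoTags` (such as the dangling build layer at the start of the listing) simply produce no line, which is why `image ls --format short` omits them.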

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-198694 image ls --format yaml --alsologtostderr:
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74488375"
- id: ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "84947242"
- id: 68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "72167568"
- id: 16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "49819792"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde
repoTags:
- registry.k8s.io/pause:3.10.1
size: "517328"
- id: 66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29035622"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60854229"
- id: 404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "74105124"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-198694 image ls --format yaml --alsologtostderr:
I1201 21:28:01.721095  547089 out.go:360] Setting OutFile to fd 1 ...
I1201 21:28:01.721310  547089 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 21:28:01.721338  547089 out.go:374] Setting ErrFile to fd 2...
I1201 21:28:01.721357  547089 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 21:28:01.721643  547089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
I1201 21:28:01.722336  547089 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 21:28:01.722520  547089 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 21:28:01.723115  547089 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
I1201 21:28:01.741023  547089 ssh_runner.go:195] Run: systemctl --version
I1201 21:28:01.741079  547089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
I1201 21:28:01.758538  547089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
I1201 21:28:01.862151  547089 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-198694 ssh pgrep buildkitd: exit status 1 (287.878591ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 image build -t localhost/my-image:functional-198694 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-198694 image build -t localhost/my-image:functional-198694 testdata/build --alsologtostderr: (3.15239319s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-198694 image build -t localhost/my-image:functional-198694 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0b88b158e60
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-198694
--> 7f0efee0cc6
Successfully tagged localhost/my-image:functional-198694
7f0efee0cc68fb39bd9cc45fcc38234bec627d1dd4c233b213d85763a5342024
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-198694 image build -t localhost/my-image:functional-198694 testdata/build --alsologtostderr:
I1201 21:28:02.239826  547193 out.go:360] Setting OutFile to fd 1 ...
I1201 21:28:02.240039  547193 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 21:28:02.240073  547193 out.go:374] Setting ErrFile to fd 2...
I1201 21:28:02.240097  547193 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 21:28:02.240381  547193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
I1201 21:28:02.241073  547193 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 21:28:02.241873  547193 config.go:182] Loaded profile config "functional-198694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 21:28:02.242433  547193 cli_runner.go:164] Run: docker container inspect functional-198694 --format={{.State.Status}}
I1201 21:28:02.260673  547193 ssh_runner.go:195] Run: systemctl --version
I1201 21:28:02.260745  547193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-198694
I1201 21:28:02.278387  547193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/functional-198694/id_rsa Username:docker}
I1201 21:28:02.382662  547193 build_images.go:162] Building image from path: /tmp/build.2695107421.tar
I1201 21:28:02.382739  547193 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1201 21:28:02.391390  547193 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2695107421.tar
I1201 21:28:02.395337  547193 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2695107421.tar: stat -c "%s %y" /var/lib/minikube/build/build.2695107421.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2695107421.tar': No such file or directory
I1201 21:28:02.395372  547193 ssh_runner.go:362] scp /tmp/build.2695107421.tar --> /var/lib/minikube/build/build.2695107421.tar (3072 bytes)
I1201 21:28:02.413899  547193 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2695107421
I1201 21:28:02.422096  547193 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2695107421 -xf /var/lib/minikube/build/build.2695107421.tar
I1201 21:28:02.430367  547193 crio.go:315] Building image: /var/lib/minikube/build/build.2695107421
I1201 21:28:02.430444  547193 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-198694 /var/lib/minikube/build/build.2695107421 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1201 21:28:05.309170  547193 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-198694 /var/lib/minikube/build/build.2695107421 --cgroup-manager=cgroupfs: (2.87868975s)
I1201 21:28:05.309258  547193 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2695107421
I1201 21:28:05.317810  547193 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2695107421.tar
I1201 21:28:05.326459  547193 build_images.go:218] Built localhost/my-image:functional-198694 from /tmp/build.2695107421.tar
I1201 21:28:05.326492  547193 build_images.go:134] succeeded building to: functional-198694
I1201 21:28:05.326498  547193 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.68s)
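Note: the three STEP lines in the build output above correspond to a minimal three-instruction build context. A sketch of what that context looks like (the real testdata/build directory ships with the minikube repo; the content.txt payload below is a placeholder assumption):

```shell
# Reconstruct a build context matching the logged STEP 1/3..3/3 lines.
# content.txt contents are a placeholder; only the Dockerfile shape is
# grounded in the log.
ctx=$(mktemp -d)
printf 'placeholder\n' > "$ctx/content.txt"
cat > "$ctx/Dockerfile" <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# On the node, minikube tars this context to /var/lib/minikube/build/<id>
# and runs roughly (per the log above):
#   sudo podman build -t localhost/my-image:functional-198694 <dir> \
#     --cgroup-manager=cgroupfs
cat "$ctx/Dockerfile"
```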

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-198694
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.66s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 image rm kicbase/echo-server:functional-198694 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.66s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-198694 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-198694 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "345.502829ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "56.825608ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "326.08772ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "64.072925ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1645726952/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-198694 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (413.205313ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1201 21:27:54.699536  486002 retry.go:31] will retry after 509.400773ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1645726952/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-198694 ssh "sudo umount -f /mount-9p": exit status 1 (304.573826ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-198694 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1645726952/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.01s)
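Note: the findmnt probe above fails once and is retried after ~509ms (retry.go:31). A rough POSIX-shell equivalent of that poll-until-mounted loop; the attempt count and interval are illustrative, not minikube's actual backoff:

```shell
# retry <attempts> <cmd...>: re-run cmd until it succeeds, sleeping between
# tries; returns 1 if all attempts fail. Values are illustrative.
retry() {
  attempts=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$attempts" ] && return 1
    sleep 0.5
  done
}
# Usage inside the node (the probe the test runs over ssh):
#   retry 5 sh -c 'findmnt -T /mount-9p | grep 9p'
```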

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1388782895/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1388782895/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1388782895/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-198694 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-198694 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1388782895/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1388782895/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-198694 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1388782895/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-198694
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-198694
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-198694
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (209.73s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1201 21:30:34.916426  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:31:02.451302  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:31:02.458070  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:31:02.469460  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:31:02.490835  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:31:02.532207  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:31:02.613595  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:31:02.775186  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:31:03.097027  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:31:03.739025  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:31:05.020505  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:31:07.583274  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:31:12.705517  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:31:22.947206  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:31:43.429122  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:32:24.391274  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:32:52.876522  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-783413 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m28.771142083s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (209.73s)

TestMultiControlPlane/serial/DeployApp (6.82s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-783413 kubectl -- rollout status deployment/busybox: (3.740103432s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- exec busybox-7b57f96db7-9dcnf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- exec busybox-7b57f96db7-zj9wh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- exec busybox-7b57f96db7-znkzv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- exec busybox-7b57f96db7-9dcnf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- exec busybox-7b57f96db7-zj9wh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- exec busybox-7b57f96db7-znkzv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- exec busybox-7b57f96db7-9dcnf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- exec busybox-7b57f96db7-zj9wh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- exec busybox-7b57f96db7-znkzv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.82s)

TestMultiControlPlane/serial/PingHostFromPods (1.51s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- exec busybox-7b57f96db7-9dcnf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- exec busybox-7b57f96db7-9dcnf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- exec busybox-7b57f96db7-zj9wh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E1201 21:33:46.313018  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- exec busybox-7b57f96db7-zj9wh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- exec busybox-7b57f96db7-znkzv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 kubectl -- exec busybox-7b57f96db7-znkzv -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.51s)

TestMultiControlPlane/serial/AddWorkerNode (60s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-783413 node add --alsologtostderr -v 5: (58.893937981s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-783413 status --alsologtostderr -v 5: (1.101807564s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.00s)

TestMultiControlPlane/serial/NodeLabels (0.1s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-783413 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.13s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.134583658s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.13s)

TestMultiControlPlane/serial/CopyFile (21.05s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-783413 status --output json --alsologtostderr -v 5: (1.062794877s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp testdata/cp-test.txt ha-783413:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp ha-783413:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile79586140/001/cp-test_ha-783413.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp ha-783413:/home/docker/cp-test.txt ha-783413-m02:/home/docker/cp-test_ha-783413_ha-783413-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m02 "sudo cat /home/docker/cp-test_ha-783413_ha-783413-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp ha-783413:/home/docker/cp-test.txt ha-783413-m03:/home/docker/cp-test_ha-783413_ha-783413-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m03 "sudo cat /home/docker/cp-test_ha-783413_ha-783413-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp ha-783413:/home/docker/cp-test.txt ha-783413-m04:/home/docker/cp-test_ha-783413_ha-783413-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m04 "sudo cat /home/docker/cp-test_ha-783413_ha-783413-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp testdata/cp-test.txt ha-783413-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp ha-783413-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile79586140/001/cp-test_ha-783413-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp ha-783413-m02:/home/docker/cp-test.txt ha-783413:/home/docker/cp-test_ha-783413-m02_ha-783413.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413 "sudo cat /home/docker/cp-test_ha-783413-m02_ha-783413.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp ha-783413-m02:/home/docker/cp-test.txt ha-783413-m03:/home/docker/cp-test_ha-783413-m02_ha-783413-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m03 "sudo cat /home/docker/cp-test_ha-783413-m02_ha-783413-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp ha-783413-m02:/home/docker/cp-test.txt ha-783413-m04:/home/docker/cp-test_ha-783413-m02_ha-783413-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m04 "sudo cat /home/docker/cp-test_ha-783413-m02_ha-783413-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp testdata/cp-test.txt ha-783413-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp ha-783413-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile79586140/001/cp-test_ha-783413-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp ha-783413-m03:/home/docker/cp-test.txt ha-783413:/home/docker/cp-test_ha-783413-m03_ha-783413.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413 "sudo cat /home/docker/cp-test_ha-783413-m03_ha-783413.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp ha-783413-m03:/home/docker/cp-test.txt ha-783413-m02:/home/docker/cp-test_ha-783413-m03_ha-783413-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m02 "sudo cat /home/docker/cp-test_ha-783413-m03_ha-783413-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp ha-783413-m03:/home/docker/cp-test.txt ha-783413-m04:/home/docker/cp-test_ha-783413-m03_ha-783413-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m04 "sudo cat /home/docker/cp-test_ha-783413-m03_ha-783413-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp testdata/cp-test.txt ha-783413-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp ha-783413-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile79586140/001/cp-test_ha-783413-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp ha-783413-m04:/home/docker/cp-test.txt ha-783413:/home/docker/cp-test_ha-783413-m04_ha-783413.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413 "sudo cat /home/docker/cp-test_ha-783413-m04_ha-783413.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp ha-783413-m04:/home/docker/cp-test.txt ha-783413-m02:/home/docker/cp-test_ha-783413-m04_ha-783413-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m02 "sudo cat /home/docker/cp-test_ha-783413-m04_ha-783413-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 cp ha-783413-m04:/home/docker/cp-test.txt ha-783413-m03:/home/docker/cp-test_ha-783413-m04_ha-783413-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 ssh -n ha-783413-m03 "sudo cat /home/docker/cp-test_ha-783413-m04_ha-783413-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (21.05s)

TestMultiControlPlane/serial/StopSecondaryNode (13.01s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 node stop m02 --alsologtostderr -v 5
E1201 21:35:17.989797  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-783413 node stop m02 --alsologtostderr -v 5: (12.158620656s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-783413 status --alsologtostderr -v 5: exit status 7 (854.195672ms)

-- stdout --
	ha-783413
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-783413-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-783413-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-783413-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1201 21:35:21.671942  563295 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:35:21.672125  563295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:35:21.672138  563295 out.go:374] Setting ErrFile to fd 2...
	I1201 21:35:21.672144  563295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:35:21.672399  563295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:35:21.672586  563295 out.go:368] Setting JSON to false
	I1201 21:35:21.672627  563295 mustload.go:66] Loading cluster: ha-783413
	I1201 21:35:21.672699  563295 notify.go:221] Checking for updates...
	I1201 21:35:21.673602  563295 config.go:182] Loaded profile config "ha-783413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 21:35:21.673625  563295 status.go:174] checking status of ha-783413 ...
	I1201 21:35:21.674208  563295 cli_runner.go:164] Run: docker container inspect ha-783413 --format={{.State.Status}}
	I1201 21:35:21.698256  563295 status.go:371] ha-783413 host status = "Running" (err=<nil>)
	I1201 21:35:21.698279  563295 host.go:66] Checking if "ha-783413" exists ...
	I1201 21:35:21.698577  563295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-783413
	I1201 21:35:21.739898  563295 host.go:66] Checking if "ha-783413" exists ...
	I1201 21:35:21.740289  563295 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 21:35:21.740407  563295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-783413
	I1201 21:35:21.761390  563295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/ha-783413/id_rsa Username:docker}
	I1201 21:35:21.868666  563295 ssh_runner.go:195] Run: systemctl --version
	I1201 21:35:21.875797  563295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 21:35:21.890066  563295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:35:21.977103  563295 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-01 21:35:21.965749375 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:35:21.977696  563295 kubeconfig.go:125] found "ha-783413" server: "https://192.168.49.254:8443"
	I1201 21:35:21.977731  563295 api_server.go:166] Checking apiserver status ...
	I1201 21:35:21.977788  563295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:35:21.991811  563295 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1228/cgroup
	I1201 21:35:22.004129  563295 api_server.go:182] apiserver freezer: "13:freezer:/docker/ceb0d180b35b075a3c3075b2fa094d457425e49259bac0fcf3a0f965e9d9f1ff/crio/crio-f8f5f92953bab246429164478bc76b3b4e4a4ade4c1aa7df201057bf4b8d449c"
	I1201 21:35:22.004218  563295 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ceb0d180b35b075a3c3075b2fa094d457425e49259bac0fcf3a0f965e9d9f1ff/crio/crio-f8f5f92953bab246429164478bc76b3b4e4a4ade4c1aa7df201057bf4b8d449c/freezer.state
	I1201 21:35:22.013788  563295 api_server.go:204] freezer state: "THAWED"
	I1201 21:35:22.013819  563295 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1201 21:35:22.022476  563295 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1201 21:35:22.022583  563295 status.go:463] ha-783413 apiserver status = Running (err=<nil>)
	I1201 21:35:22.022600  563295 status.go:176] ha-783413 status: &{Name:ha-783413 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 21:35:22.022618  563295 status.go:174] checking status of ha-783413-m02 ...
	I1201 21:35:22.022991  563295 cli_runner.go:164] Run: docker container inspect ha-783413-m02 --format={{.State.Status}}
	I1201 21:35:22.043461  563295 status.go:371] ha-783413-m02 host status = "Stopped" (err=<nil>)
	I1201 21:35:22.043487  563295 status.go:384] host is not running, skipping remaining checks
	I1201 21:35:22.043495  563295 status.go:176] ha-783413-m02 status: &{Name:ha-783413-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 21:35:22.043531  563295 status.go:174] checking status of ha-783413-m03 ...
	I1201 21:35:22.043860  563295 cli_runner.go:164] Run: docker container inspect ha-783413-m03 --format={{.State.Status}}
	I1201 21:35:22.063735  563295 status.go:371] ha-783413-m03 host status = "Running" (err=<nil>)
	I1201 21:35:22.063762  563295 host.go:66] Checking if "ha-783413-m03" exists ...
	I1201 21:35:22.064112  563295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-783413-m03
	I1201 21:35:22.085778  563295 host.go:66] Checking if "ha-783413-m03" exists ...
	I1201 21:35:22.086150  563295 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 21:35:22.086200  563295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-783413-m03
	I1201 21:35:22.105930  563295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/ha-783413-m03/id_rsa Username:docker}
	I1201 21:35:22.213740  563295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 21:35:22.229675  563295 kubeconfig.go:125] found "ha-783413" server: "https://192.168.49.254:8443"
	I1201 21:35:22.229707  563295 api_server.go:166] Checking apiserver status ...
	I1201 21:35:22.229752  563295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:35:22.242051  563295 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	I1201 21:35:22.250981  563295 api_server.go:182] apiserver freezer: "13:freezer:/docker/3f50fba284faa51eadb381cd53c9e229c133d19303377aec4757216829625c05/crio/crio-39d4b378c5c6bb6097ffca965254eb3b56632dafcf8cd4a461b475b95ba8f9ea"
	I1201 21:35:22.251085  563295 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3f50fba284faa51eadb381cd53c9e229c133d19303377aec4757216829625c05/crio/crio-39d4b378c5c6bb6097ffca965254eb3b56632dafcf8cd4a461b475b95ba8f9ea/freezer.state
	I1201 21:35:22.259092  563295 api_server.go:204] freezer state: "THAWED"
	I1201 21:35:22.259126  563295 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1201 21:35:22.267761  563295 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1201 21:35:22.267790  563295 status.go:463] ha-783413-m03 apiserver status = Running (err=<nil>)
	I1201 21:35:22.267800  563295 status.go:176] ha-783413-m03 status: &{Name:ha-783413-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 21:35:22.267825  563295 status.go:174] checking status of ha-783413-m04 ...
	I1201 21:35:22.268141  563295 cli_runner.go:164] Run: docker container inspect ha-783413-m04 --format={{.State.Status}}
	I1201 21:35:22.287895  563295 status.go:371] ha-783413-m04 host status = "Running" (err=<nil>)
	I1201 21:35:22.287919  563295 host.go:66] Checking if "ha-783413-m04" exists ...
	I1201 21:35:22.288236  563295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-783413-m04
	I1201 21:35:22.306320  563295 host.go:66] Checking if "ha-783413-m04" exists ...
	I1201 21:35:22.306731  563295 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 21:35:22.306782  563295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-783413-m04
	I1201 21:35:22.325430  563295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33200 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/ha-783413-m04/id_rsa Username:docker}
	I1201 21:35:22.438991  563295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 21:35:22.459706  563295 status.go:176] ha-783413-m04 status: &{Name:ha-783413-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.01s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

TestMultiControlPlane/serial/RestartSecondaryNode (111.05s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 node start m02 --alsologtostderr -v 5
E1201 21:35:34.916415  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:35:55.947308  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:36:02.455346  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:36:30.158709  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-783413 node start m02 --alsologtostderr -v 5: (1m49.516871862s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-783413 status --alsologtostderr -v 5: (1.40946783s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (111.05s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.2s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.202598707s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.20s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (121.36s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 stop --alsologtostderr -v 5
E1201 21:37:52.876289  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-783413 stop --alsologtostderr -v 5: (37.583709176s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-783413 start --wait true --alsologtostderr -v 5: (1m23.619111533s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (121.36s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.62s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-783413 node delete m03 --alsologtostderr -v 5: (9.634627331s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.62s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

TestMultiControlPlane/serial/StopCluster (36.21s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-783413 stop --alsologtostderr -v 5: (36.093208279s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-783413 status --alsologtostderr -v 5: exit status 7 (112.233731ms)

-- stdout --
	ha-783413
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-783413-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-783413-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1201 21:40:04.444573  575059 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:40:04.444771  575059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:40:04.444798  575059 out.go:374] Setting ErrFile to fd 2...
	I1201 21:40:04.444819  575059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:40:04.445104  575059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:40:04.445322  575059 out.go:368] Setting JSON to false
	I1201 21:40:04.445379  575059 mustload.go:66] Loading cluster: ha-783413
	I1201 21:40:04.445458  575059 notify.go:221] Checking for updates...
	I1201 21:40:04.446486  575059 config.go:182] Loaded profile config "ha-783413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 21:40:04.446542  575059 status.go:174] checking status of ha-783413 ...
	I1201 21:40:04.447235  575059 cli_runner.go:164] Run: docker container inspect ha-783413 --format={{.State.Status}}
	I1201 21:40:04.464694  575059 status.go:371] ha-783413 host status = "Stopped" (err=<nil>)
	I1201 21:40:04.464715  575059 status.go:384] host is not running, skipping remaining checks
	I1201 21:40:04.464722  575059 status.go:176] ha-783413 status: &{Name:ha-783413 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 21:40:04.464749  575059 status.go:174] checking status of ha-783413-m02 ...
	I1201 21:40:04.465060  575059 cli_runner.go:164] Run: docker container inspect ha-783413-m02 --format={{.State.Status}}
	I1201 21:40:04.484042  575059 status.go:371] ha-783413-m02 host status = "Stopped" (err=<nil>)
	I1201 21:40:04.484062  575059 status.go:384] host is not running, skipping remaining checks
	I1201 21:40:04.484069  575059 status.go:176] ha-783413-m02 status: &{Name:ha-783413-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 21:40:04.484100  575059 status.go:174] checking status of ha-783413-m04 ...
	I1201 21:40:04.484392  575059 cli_runner.go:164] Run: docker container inspect ha-783413-m04 --format={{.State.Status}}
	I1201 21:40:04.506866  575059 status.go:371] ha-783413-m04 host status = "Stopped" (err=<nil>)
	I1201 21:40:04.506888  575059 status.go:384] host is not running, skipping remaining checks
	I1201 21:40:04.506894  575059 status.go:176] ha-783413-m04 status: &{Name:ha-783413-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.21s)

TestMultiControlPlane/serial/RestartCluster (67.55s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1201 21:40:34.916138  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:41:02.450371  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-783413 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m6.469610342s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (67.55s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.84s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.84s)

TestMultiControlPlane/serial/AddSecondaryNode (93.17s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-783413 node add --control-plane --alsologtostderr -v 5: (1m31.791636181s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-783413 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-783413 status --alsologtostderr -v 5: (1.382287673s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (93.17s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.093892098s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

TestJSONOutput/start/Command (78.48s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-669097 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-669097 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m18.477353909s)
--- PASS: TestJSONOutput/start/Command (78.48s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-669097 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-669097 --output=json --user=testUser: (5.866417158s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-218123 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-218123 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (93.154379ms)

-- stdout --
	{"specversion":"1.0","id":"f009d34b-f50e-4162-9b28-87f991c113c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-218123] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c78de29-d8b0-4c26-836d-5c0fdd399371","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"dd9be91d-0640-47d6-877c-2b66dad54b24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0f406eb3-2f1c-4aad-9d2f-4c710214b57d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig"}}
	{"specversion":"1.0","id":"d6a6fea1-1b8e-4d43-9b76-7b85e0b9cc83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube"}}
	{"specversion":"1.0","id":"26e60681-4fd4-40f4-ad46-d1312b9f537a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8a58445d-79df-471f-90c1-1e6f13e889a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6a0b9393-cd32-47a8-864b-19b79c0e960d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-218123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-218123
--- PASS: TestErrorJSONOutput (0.25s)

TestKicCustomNetwork/create_custom_network (63.07s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-542228 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-542228 --network=: (1m0.760958864s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-542228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-542228
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-542228: (2.277161178s)
--- PASS: TestKicCustomNetwork/create_custom_network (63.07s)

TestKicCustomNetwork/use_default_bridge_network (35.66s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-289253 --network=bridge
E1201 21:45:34.915898  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:46:02.455296  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-289253 --network=bridge: (33.536829217s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-289253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-289253
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-289253: (2.102165025s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.66s)

TestKicExistingNetwork (38.15s)

=== RUN   TestKicExistingNetwork
I1201 21:46:07.838305  486002 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1201 21:46:07.854673  486002 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1201 21:46:07.856649  486002 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1201 21:46:07.856708  486002 cli_runner.go:164] Run: docker network inspect existing-network
W1201 21:46:07.873205  486002 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1201 21:46:07.873236  486002 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1201 21:46:07.873254  486002 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1201 21:46:07.873370  486002 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1201 21:46:07.891919  486002 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8bc0bedecc32 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7e:01:66:4f:e0:c6} reservation:<nil>}
I1201 21:46:07.892277  486002 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001e2aed0}
I1201 21:46:07.892306  486002 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1201 21:46:07.892364  486002 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1201 21:46:07.963632  486002 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-666978 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-666978 --network=existing-network: (35.699734982s)
helpers_test.go:175: Cleaning up "existing-network-666978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-666978
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-666978: (2.283495434s)
I1201 21:46:45.964845  486002 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (38.15s)

TestKicCustomSubnet (35.15s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-397223 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-397223 --subnet=192.168.60.0/24: (32.857547907s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-397223 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-397223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-397223
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-397223: (2.261449255s)
--- PASS: TestKicCustomSubnet (35.15s)

TestKicStaticIP (37.55s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-999017 --static-ip=192.168.200.200
E1201 21:47:25.520086  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:47:52.876696  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-999017 --static-ip=192.168.200.200: (35.149136349s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-999017 ip
helpers_test.go:175: Cleaning up "static-ip-999017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-999017
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-999017: (2.227735124s)
--- PASS: TestKicStaticIP (37.55s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (68.6s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-652849 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-652849 --driver=docker  --container-runtime=crio: (32.812035565s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-655433 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-655433 --driver=docker  --container-runtime=crio: (30.060889249s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-652849
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-655433
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-655433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-655433
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-655433: (2.114348656s)
helpers_test.go:175: Cleaning up "first-652849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-652849
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-652849: (2.083639962s)
--- PASS: TestMinikubeProfile (68.60s)

TestMountStart/serial/StartWithMountFirst (6.39s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-591865 --memory=3072 --mount-string /tmp/TestMountStartserial2639098612/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-591865 --memory=3072 --mount-string /tmp/TestMountStartserial2639098612/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.392800399s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.39s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-591865 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (8.93s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-593620 --memory=3072 --mount-string /tmp/TestMountStartserial2639098612/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-593620 --memory=3072 --mount-string /tmp/TestMountStartserial2639098612/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.925239551s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.93s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-593620 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.74s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-591865 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-591865 --alsologtostderr -v=5: (1.737677477s)
--- PASS: TestMountStart/serial/DeleteFirst (1.74s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-593620 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.30s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-593620
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-593620: (1.297269789s)
--- PASS: TestMountStart/serial/Stop (1.30s)

TestMountStart/serial/RestartStopped (8.22s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-593620
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-593620: (7.223395437s)
--- PASS: TestMountStart/serial/RestartStopped (8.22s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-593620 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (139.50s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-267103 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1201 21:50:34.916437  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:51:02.450474  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-267103 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m18.933437731s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (139.50s)

TestMultiNode/serial/DeployApp2Nodes (5.94s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-267103 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-267103 -- rollout status deployment/busybox
E1201 21:51:57.991474  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-267103 -- rollout status deployment/busybox: (4.042831117s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-267103 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-267103 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-267103 -- exec busybox-7b57f96db7-8ptkc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-267103 -- exec busybox-7b57f96db7-dx6cf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-267103 -- exec busybox-7b57f96db7-8ptkc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-267103 -- exec busybox-7b57f96db7-dx6cf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-267103 -- exec busybox-7b57f96db7-8ptkc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-267103 -- exec busybox-7b57f96db7-dx6cf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.94s)

TestMultiNode/serial/PingHostFrom2Pods (0.98s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-267103 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-267103 -- exec busybox-7b57f96db7-8ptkc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-267103 -- exec busybox-7b57f96db7-8ptkc -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-267103 -- exec busybox-7b57f96db7-dx6cf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-267103 -- exec busybox-7b57f96db7-dx6cf -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)

TestMultiNode/serial/AddNode (58.29s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-267103 -v=5 --alsologtostderr
E1201 21:52:35.948772  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:52:52.876622  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-267103 -v=5 --alsologtostderr: (57.546368981s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.29s)

TestMultiNode/serial/MultiNodeLabels (0.10s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-267103 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.73s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

TestMultiNode/serial/CopyFile (10.80s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 cp testdata/cp-test.txt multinode-267103:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 ssh -n multinode-267103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 cp multinode-267103:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1286892911/001/cp-test_multinode-267103.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 ssh -n multinode-267103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 cp multinode-267103:/home/docker/cp-test.txt multinode-267103-m02:/home/docker/cp-test_multinode-267103_multinode-267103-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 ssh -n multinode-267103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 ssh -n multinode-267103-m02 "sudo cat /home/docker/cp-test_multinode-267103_multinode-267103-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 cp multinode-267103:/home/docker/cp-test.txt multinode-267103-m03:/home/docker/cp-test_multinode-267103_multinode-267103-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 ssh -n multinode-267103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 ssh -n multinode-267103-m03 "sudo cat /home/docker/cp-test_multinode-267103_multinode-267103-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 cp testdata/cp-test.txt multinode-267103-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 ssh -n multinode-267103-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 cp multinode-267103-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1286892911/001/cp-test_multinode-267103-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 ssh -n multinode-267103-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 cp multinode-267103-m02:/home/docker/cp-test.txt multinode-267103:/home/docker/cp-test_multinode-267103-m02_multinode-267103.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 ssh -n multinode-267103-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 ssh -n multinode-267103 "sudo cat /home/docker/cp-test_multinode-267103-m02_multinode-267103.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 cp multinode-267103-m02:/home/docker/cp-test.txt multinode-267103-m03:/home/docker/cp-test_multinode-267103-m02_multinode-267103-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 ssh -n multinode-267103-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 ssh -n multinode-267103-m03 "sudo cat /home/docker/cp-test_multinode-267103-m02_multinode-267103-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 cp testdata/cp-test.txt multinode-267103-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 ssh -n multinode-267103-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 cp multinode-267103-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1286892911/001/cp-test_multinode-267103-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 ssh -n multinode-267103-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 cp multinode-267103-m03:/home/docker/cp-test.txt multinode-267103:/home/docker/cp-test_multinode-267103-m03_multinode-267103.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 ssh -n multinode-267103-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 ssh -n multinode-267103 "sudo cat /home/docker/cp-test_multinode-267103-m03_multinode-267103.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 cp multinode-267103-m03:/home/docker/cp-test.txt multinode-267103-m02:/home/docker/cp-test_multinode-267103-m03_multinode-267103-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 ssh -n multinode-267103-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 ssh -n multinode-267103-m02 "sudo cat /home/docker/cp-test_multinode-267103-m03_multinode-267103-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.80s)

TestMultiNode/serial/StopNode (2.48s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-267103 node stop m03: (1.319757387s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-267103 status: exit status 7 (597.864992ms)
-- stdout --
	multinode-267103
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-267103-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-267103-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-267103 status --alsologtostderr: exit status 7 (565.851648ms)
-- stdout --
	multinode-267103
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-267103-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-267103-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1201 21:53:15.262303  625372 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:53:15.262477  625372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:53:15.262490  625372 out.go:374] Setting ErrFile to fd 2...
	I1201 21:53:15.262497  625372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:53:15.262784  625372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:53:15.262995  625372 out.go:368] Setting JSON to false
	I1201 21:53:15.263041  625372 mustload.go:66] Loading cluster: multinode-267103
	I1201 21:53:15.263103  625372 notify.go:221] Checking for updates...
	I1201 21:53:15.264472  625372 config.go:182] Loaded profile config "multinode-267103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 21:53:15.264503  625372 status.go:174] checking status of multinode-267103 ...
	I1201 21:53:15.265237  625372 cli_runner.go:164] Run: docker container inspect multinode-267103 --format={{.State.Status}}
	I1201 21:53:15.288045  625372 status.go:371] multinode-267103 host status = "Running" (err=<nil>)
	I1201 21:53:15.288069  625372 host.go:66] Checking if "multinode-267103" exists ...
	I1201 21:53:15.288363  625372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-267103
	I1201 21:53:15.318003  625372 host.go:66] Checking if "multinode-267103" exists ...
	I1201 21:53:15.318442  625372 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 21:53:15.318494  625372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-267103
	I1201 21:53:15.338645  625372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33307 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/multinode-267103/id_rsa Username:docker}
	I1201 21:53:15.445708  625372 ssh_runner.go:195] Run: systemctl --version
	I1201 21:53:15.452933  625372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 21:53:15.466598  625372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 21:53:15.532719  625372 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-01 21:53:15.521552361 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1201 21:53:15.533322  625372 kubeconfig.go:125] found "multinode-267103" server: "https://192.168.67.2:8443"
	I1201 21:53:15.533364  625372 api_server.go:166] Checking apiserver status ...
	I1201 21:53:15.533412  625372 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 21:53:15.545709  625372 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1234/cgroup
	I1201 21:53:15.554617  625372 api_server.go:182] apiserver freezer: "13:freezer:/docker/d5a27f5dd9803b2dd6569b19cba193163765d87232748dc453f806bb9c54c9d4/crio/crio-5c8ce00deb3e4164024403ca447c81f8ffabcf6c781536687d847958e8a9f1cd"
	I1201 21:53:15.554685  625372 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d5a27f5dd9803b2dd6569b19cba193163765d87232748dc453f806bb9c54c9d4/crio/crio-5c8ce00deb3e4164024403ca447c81f8ffabcf6c781536687d847958e8a9f1cd/freezer.state
	I1201 21:53:15.562982  625372 api_server.go:204] freezer state: "THAWED"
	I1201 21:53:15.563010  625372 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1201 21:53:15.571538  625372 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1201 21:53:15.571568  625372 status.go:463] multinode-267103 apiserver status = Running (err=<nil>)
	I1201 21:53:15.571579  625372 status.go:176] multinode-267103 status: &{Name:multinode-267103 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 21:53:15.571625  625372 status.go:174] checking status of multinode-267103-m02 ...
	I1201 21:53:15.571948  625372 cli_runner.go:164] Run: docker container inspect multinode-267103-m02 --format={{.State.Status}}
	I1201 21:53:15.591887  625372 status.go:371] multinode-267103-m02 host status = "Running" (err=<nil>)
	I1201 21:53:15.591915  625372 host.go:66] Checking if "multinode-267103-m02" exists ...
	I1201 21:53:15.592231  625372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-267103-m02
	I1201 21:53:15.610815  625372 host.go:66] Checking if "multinode-267103-m02" exists ...
	I1201 21:53:15.611163  625372 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 21:53:15.611227  625372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-267103-m02
	I1201 21:53:15.629826  625372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33312 SSHKeyPath:/home/jenkins/minikube-integration/21997-482752/.minikube/machines/multinode-267103-m02/id_rsa Username:docker}
	I1201 21:53:15.736512  625372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 21:53:15.749677  625372 status.go:176] multinode-267103-m02 status: &{Name:multinode-267103-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1201 21:53:15.749766  625372 status.go:174] checking status of multinode-267103-m03 ...
	I1201 21:53:15.750151  625372 cli_runner.go:164] Run: docker container inspect multinode-267103-m03 --format={{.State.Status}}
	I1201 21:53:15.767687  625372 status.go:371] multinode-267103-m03 host status = "Stopped" (err=<nil>)
	I1201 21:53:15.767713  625372 status.go:384] host is not running, skipping remaining checks
	I1201 21:53:15.767720  625372 status.go:176] multinode-267103-m03 status: &{Name:multinode-267103-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.48s)

TestMultiNode/serial/StartAfterStop (8.28s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-267103 node start m03 -v=5 --alsologtostderr: (7.442361159s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.28s)

TestMultiNode/serial/RestartKeepsNodes (75.30s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-267103
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-267103
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-267103: (25.150092246s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-267103 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-267103 --wait=true -v=5 --alsologtostderr: (50.024514212s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-267103
--- PASS: TestMultiNode/serial/RestartKeepsNodes (75.30s)

TestMultiNode/serial/DeleteNode (5.85s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-267103 node delete m03: (5.028078196s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.85s)

TestMultiNode/serial/StopMultiNode (24.11s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-267103 stop: (23.90755451s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-267103 status: exit status 7 (103.734432ms)
-- stdout --
	multinode-267103
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-267103-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-267103 status --alsologtostderr: exit status 7 (103.239722ms)
-- stdout --
	multinode-267103
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-267103-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1201 21:55:09.260899  633193 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:55:09.261018  633193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:55:09.261030  633193 out.go:374] Setting ErrFile to fd 2...
	I1201 21:55:09.261036  633193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:55:09.261392  633193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:55:09.261610  633193 out.go:368] Setting JSON to false
	I1201 21:55:09.261638  633193 mustload.go:66] Loading cluster: multinode-267103
	I1201 21:55:09.262325  633193 config.go:182] Loaded profile config "multinode-267103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 21:55:09.262342  633193 status.go:174] checking status of multinode-267103 ...
	I1201 21:55:09.263063  633193 cli_runner.go:164] Run: docker container inspect multinode-267103 --format={{.State.Status}}
	I1201 21:55:09.263415  633193 notify.go:221] Checking for updates...
	I1201 21:55:09.279560  633193 status.go:371] multinode-267103 host status = "Stopped" (err=<nil>)
	I1201 21:55:09.279580  633193 status.go:384] host is not running, skipping remaining checks
	I1201 21:55:09.279587  633193 status.go:176] multinode-267103 status: &{Name:multinode-267103 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 21:55:09.279617  633193 status.go:174] checking status of multinode-267103-m02 ...
	I1201 21:55:09.279915  633193 cli_runner.go:164] Run: docker container inspect multinode-267103-m02 --format={{.State.Status}}
	I1201 21:55:09.311424  633193 status.go:371] multinode-267103-m02 host status = "Stopped" (err=<nil>)
	I1201 21:55:09.311445  633193 status.go:384] host is not running, skipping remaining checks
	I1201 21:55:09.311463  633193 status.go:176] multinode-267103-m02 status: &{Name:multinode-267103-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.11s)

TestMultiNode/serial/RestartMultiNode (54.33s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-267103 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1201 21:55:34.916831  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 21:56:02.450590  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-267103 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (53.59299549s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-267103 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.33s)

TestMultiNode/serial/ValidateNameConflict (35.73s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-267103
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-267103-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-267103-m02 --driver=docker  --container-runtime=crio: exit status 14 (95.497482ms)

-- stdout --
	* [multinode-267103-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-267103-m02' is duplicated with machine name 'multinode-267103-m02' in profile 'multinode-267103'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-267103-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-267103-m03 --driver=docker  --container-runtime=crio: (33.064173637s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-267103
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-267103: exit status 80 (353.52461ms)

-- stdout --
	* Adding node m03 to cluster multinode-267103 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-267103-m03 already exists in multinode-267103-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-267103-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-267103-m03: (2.160560636s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.73s)

TestPreload (122.14s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-768534 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-768534 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m3.424329855s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-768534 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-768534 image pull gcr.io/k8s-minikube/busybox: (2.385152801s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-768534
E1201 21:57:52.876949  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-768534: (5.93947222s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-768534 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-768534 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (47.526942973s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-768534 image list
helpers_test.go:175: Cleaning up "test-preload-768534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-768534
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-768534: (2.604445659s)
--- PASS: TestPreload (122.14s)

TestScheduledStopUnix (107.16s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-566549 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-566549 --memory=3072 --driver=docker  --container-runtime=crio: (30.549694468s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-566549 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1201 21:59:16.440305  647212 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:59:16.440563  647212 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:59:16.440593  647212 out.go:374] Setting ErrFile to fd 2...
	I1201 21:59:16.440629  647212 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:59:16.441004  647212 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:59:16.441472  647212 out.go:368] Setting JSON to false
	I1201 21:59:16.441656  647212 mustload.go:66] Loading cluster: scheduled-stop-566549
	I1201 21:59:16.442155  647212 config.go:182] Loaded profile config "scheduled-stop-566549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 21:59:16.442278  647212 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/scheduled-stop-566549/config.json ...
	I1201 21:59:16.442521  647212 mustload.go:66] Loading cluster: scheduled-stop-566549
	I1201 21:59:16.442694  647212 config.go:182] Loaded profile config "scheduled-stop-566549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-566549 -n scheduled-stop-566549
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-566549 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1201 21:59:16.902583  647301 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:59:16.903948  647301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:59:16.903957  647301 out.go:374] Setting ErrFile to fd 2...
	I1201 21:59:16.903962  647301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:59:16.904647  647301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:59:16.909838  647301 out.go:368] Setting JSON to false
	I1201 21:59:16.910896  647301 daemonize_unix.go:73] killing process 647227 as it is an old scheduled stop
	I1201 21:59:16.911034  647301 mustload.go:66] Loading cluster: scheduled-stop-566549
	I1201 21:59:16.911499  647301 config.go:182] Loaded profile config "scheduled-stop-566549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 21:59:16.911621  647301 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/scheduled-stop-566549/config.json ...
	I1201 21:59:16.911832  647301 mustload.go:66] Loading cluster: scheduled-stop-566549
	I1201 21:59:16.911996  647301 config.go:182] Loaded profile config "scheduled-stop-566549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1201 21:59:16.919631  486002 retry.go:31] will retry after 59.705µs: open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/scheduled-stop-566549/pid: no such file or directory
I1201 21:59:16.919797  486002 retry.go:31] will retry after 211.11µs: open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/scheduled-stop-566549/pid: no such file or directory
I1201 21:59:16.920949  486002 retry.go:31] will retry after 128.28µs: open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/scheduled-stop-566549/pid: no such file or directory
I1201 21:59:16.922040  486002 retry.go:31] will retry after 369.647µs: open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/scheduled-stop-566549/pid: no such file or directory
I1201 21:59:16.923230  486002 retry.go:31] will retry after 758.082µs: open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/scheduled-stop-566549/pid: no such file or directory
I1201 21:59:16.924328  486002 retry.go:31] will retry after 1.123065ms: open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/scheduled-stop-566549/pid: no such file or directory
I1201 21:59:16.926464  486002 retry.go:31] will retry after 603.229µs: open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/scheduled-stop-566549/pid: no such file or directory
I1201 21:59:16.927596  486002 retry.go:31] will retry after 1.759902ms: open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/scheduled-stop-566549/pid: no such file or directory
I1201 21:59:16.929787  486002 retry.go:31] will retry after 1.56647ms: open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/scheduled-stop-566549/pid: no such file or directory
I1201 21:59:16.931936  486002 retry.go:31] will retry after 5.307051ms: open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/scheduled-stop-566549/pid: no such file or directory
I1201 21:59:16.938163  486002 retry.go:31] will retry after 7.661229ms: open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/scheduled-stop-566549/pid: no such file or directory
I1201 21:59:16.946420  486002 retry.go:31] will retry after 12.792189ms: open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/scheduled-stop-566549/pid: no such file or directory
I1201 21:59:16.959655  486002 retry.go:31] will retry after 11.115955ms: open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/scheduled-stop-566549/pid: no such file or directory
I1201 21:59:16.971877  486002 retry.go:31] will retry after 13.473377ms: open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/scheduled-stop-566549/pid: no such file or directory
I1201 21:59:16.985841  486002 retry.go:31] will retry after 31.840654ms: open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/scheduled-stop-566549/pid: no such file or directory
I1201 21:59:17.018106  486002 retry.go:31] will retry after 47.997768ms: open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/scheduled-stop-566549/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-566549 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-566549 -n scheduled-stop-566549
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-566549
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-566549 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1201 21:59:42.922977  647663 out.go:360] Setting OutFile to fd 1 ...
	I1201 21:59:42.923283  647663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:59:42.923321  647663 out.go:374] Setting ErrFile to fd 2...
	I1201 21:59:42.923349  647663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 21:59:42.923681  647663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-482752/.minikube/bin
	I1201 21:59:42.924001  647663 out.go:368] Setting JSON to false
	I1201 21:59:42.924148  647663 mustload.go:66] Loading cluster: scheduled-stop-566549
	I1201 21:59:42.924567  647663 config.go:182] Loaded profile config "scheduled-stop-566549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 21:59:42.924687  647663 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/scheduled-stop-566549/config.json ...
	I1201 21:59:42.924920  647663 mustload.go:66] Loading cluster: scheduled-stop-566549
	I1201 21:59:42.925082  647663 config.go:182] Loaded profile config "scheduled-stop-566549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-566549
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-566549: exit status 7 (88.307329ms)

-- stdout --
	scheduled-stop-566549
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-566549 -n scheduled-stop-566549
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-566549 -n scheduled-stop-566549: exit status 7 (73.024127ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-566549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-566549
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-566549: (4.903402614s)
--- PASS: TestScheduledStopUnix (107.16s)

TestInsufficientStorage (13s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-641029 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E1201 22:00:34.916068  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-641029 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.409420443s)

-- stdout --
	{"specversion":"1.0","id":"8e0757d0-cf4c-4b50-9c62-f8816838db56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-641029] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ac13040a-53d7-439f-8053-9f3e54419da3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"98c0836d-8fbf-4fd1-b08d-bd59df788326","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"398d2317-49f4-40d3-b474-905c0c5446f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig"}}
	{"specversion":"1.0","id":"45af398d-a89c-4874-b178-d73e7ab7012b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube"}}
	{"specversion":"1.0","id":"215f35e5-814a-4ced-9152-e3be79ee0b8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"815aa907-5c47-4720-978f-253781feafbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1eb1a2af-2732-4af8-a57c-d677acf8a2d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"fdc73f07-cdb4-4183-9996-ab5491a1df1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"19fe87a6-1570-446a-97ab-7fa3deb74f4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"21837261-a22a-428c-8ee5-e2f26c50a54f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"943b5971-1559-46b5-a188-94b66117a055","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-641029\" primary control-plane node in \"insufficient-storage-641029\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b55c4e2d-b737-4de5-8e98-09f23738ddaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764169655-21974 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0d930d57-1649-40d2-998a-1cdb957d0caf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"647c24dd-8db9-4b2c-ae12-6a5b43c76579","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-641029 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-641029 --output=json --layout=cluster: exit status 7 (305.904294ms)

-- stdout --
	{"Name":"insufficient-storage-641029","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-641029","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1201 22:00:43.695954  649379 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-641029" does not appear in /home/jenkins/minikube-integration/21997-482752/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-641029 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-641029 --output=json --layout=cluster: exit status 7 (304.483877ms)

-- stdout --
	{"Name":"insufficient-storage-641029","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-641029","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1201 22:00:44.000328  649447 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-641029" does not appear in /home/jenkins/minikube-integration/21997-482752/kubeconfig
	E1201 22:00:44.011827  649447 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/insufficient-storage-641029/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-641029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-641029
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-641029: (1.982768715s)
--- PASS: TestInsufficientStorage (13.00s)

TestRunningBinaryUpgrade (61.72s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1948923876 start -p running-upgrade-976949 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1201 22:08:37.995290  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1948923876 start -p running-upgrade-976949 --memory=3072 --vm-driver=docker  --container-runtime=crio: (31.031449518s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-976949 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-976949 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (16.721330022s)
helpers_test.go:175: Cleaning up "running-upgrade-976949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-976949
E1201 22:09:15.950487  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-976949: (2.132766109s)
--- PASS: TestRunningBinaryUpgrade (61.72s)

TestMissingContainerUpgrade (139.28s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2443416176 start -p missing-upgrade-152595 --memory=3072 --driver=docker  --container-runtime=crio
E1201 22:01:02.449774  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2443416176 start -p missing-upgrade-152595 --memory=3072 --driver=docker  --container-runtime=crio: (1m15.646336646s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-152595
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-152595
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-152595 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-152595 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (48.018061035s)
helpers_test.go:175: Cleaning up "missing-upgrade-152595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-152595
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-152595: (3.009282401s)
--- PASS: TestMissingContainerUpgrade (139.28s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-516822 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-516822 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (95.943081ms)

-- stdout --
	* [NoKubernetes-516822] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-482752/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-482752/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (42.39s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-516822 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-516822 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.989299588s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-516822 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.39s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.29s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-516822 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-516822 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.840443883s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-516822 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-516822 status -o json: exit status 2 (337.355414ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-516822","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-516822
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-516822: (2.113430568s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.29s)

                                                
                                    
TestNoKubernetes/serial/Start (9.53s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-516822 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-516822 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.532748577s)
--- PASS: TestNoKubernetes/serial/Start (9.53s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21997-482752/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-516822 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-516822 "sudo systemctl is-active --quiet service kubelet": exit status 1 (321.970007ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.31s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.31s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.42s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-516822
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-516822: (1.42368398s)
--- PASS: TestNoKubernetes/serial/Stop (1.42s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.93s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-516822 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-516822 --driver=docker  --container-runtime=crio: (7.927985565s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.93s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-516822 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-516822 "sudo systemctl is-active --quiet service kubelet": exit status 1 (343.685302ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.91s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.91s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (304.09s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3748462404 start -p stopped-upgrade-952426 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3748462404 start -p stopped-upgrade-952426 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.890760955s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3748462404 -p stopped-upgrade-952426 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3748462404 -p stopped-upgrade-952426 stop: (1.250131675s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-952426 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1201 22:04:05.522252  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 22:05:34.915718  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 22:06:02.450720  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 22:07:52.876439  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-074555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-952426 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.951945984s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (304.09s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.76s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-952426
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-952426: (1.755889422s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.76s)

                                                
                                    
TestPause/serial/Start (83.78s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-188533 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1201 22:10:34.916395  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/addons-947185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-188533 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m23.783491418s)
--- PASS: TestPause/serial/Start (83.78s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (26.37s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-188533 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1201 22:11:02.450394  486002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-482752/.minikube/profiles/functional-198694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-188533 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.349779819s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (26.37s)

                                                
                                    

Test skip (35/316)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.16
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.47
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.16s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1201 20:37:56.659207  486002 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
W1201 20:37:56.769535  486002 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
W1201 20:37:56.816229  486002 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.16s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.47s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-074980 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-074980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-074980
--- SKIP: TestDownloadOnlyKic (0.47s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    